What the metaverse means for you and your customers

Once again, Aarron Spinley, SVP at Thunderhead (newly acquired by Medallia), provides the kind of thought leadership that has some real meaning. He’s a paradigm for executives on how to be an internal thought leader who has an external impact. This piece is particularly germane since we are all at the stage of deciding whether the metaverse is (a) nothing more than Facebook trying to save its own butt, (b) the latest fast riser in the hype cycle (remember Clubhouse?), or (c) something with real substance that has to be accounted for in corporate and business planning in multiple ways. Me? I’m sticking to the DC Comics Multiverse. There, all the alternate universes have been fully resolved into a single universe. There’s something to be said for that. Take it away, Aarron.

Four Key Principles for Executives and Marketers

A lot of people watching the arrival of new terms like web3 and the metaverse have realised that this suggests yet another shift. Maybe even a big one. But like many things in life, it is not the thing itself but the effects of the thing that we should focus on. That, and the repeatable, sustainable management of the thing. Once we understand that, anxiety fades.

So, it is useful to understand that the metaverse is a bit like the second coming of technological capabilities that we have long recognised. In essence, it is the convergence of things like augmented reality and virtual reality to, perhaps finally, realise the intersection of the physical and digital realms that “Industry 4.0” inherently promised. Why is the metaverse only just now emerging? Because other enabling technologies – better cloud computing, broadband access, virtual currencies, collaboration tools, et al. – weren’t ready when metaverse-related tech first arrived.

In short, the metaverse represents the start of the shift from a 2D internet world to a 3D one. Or so the marketing slogan goes. But while it is something old, it is also something new.
Underestimate it at your peril. Most are expecting it to spawn whole new industries, to revolutionise commerce, the “creator economy” and, of course, the very nature of communication and collaboration, both personally and professionally. Ergo: marketers and other customer professionals will have an opportunity to think differently, at least in execution, about how to interact with customers in completely new ways. Although thinking differently has often proved a stretch too far in an industry that chows down on buzzwords and loves to follow the pack.

Here’s an example. Instead of calling a contact centre for a phone conversation, you, or your avatar for that matter, might be sitting in a booth with the avatar of the service agent, talking through your problem. What are the ripple effects of that type of interaction? How would we consume that, relate to it, process it emotionally, or react to different stimuli within that context? There are a million use cases and ideas, but the really big question is this: how do we establish principles that allow us to harness the opportunities repeatably, and how do we do that safely? Hopefully, this helps.

The Metaverse is a Service Layer Issue

In my estimation, and depending on the category, the service layer accounts for 90–98% of the interactions that any brand has with its customers. From the car park to the website, to the contact centre, to the mobile app, and into a store experience (if you have one), these are all “services.” That means if we know what we are doing, our goal for all of them is to be as low-friction as possible, and utterly unmemorable. Yes, I know. Our industry is infatuated with the word “experience.” But most of the time, those using the term are referring to service interactions because, in truth, much of the industry has no clue what the difference is, and has co-opted the word “experience” to mean, well, everything. So be it.
If you’re not clear on the demarcation between services and experiences, read this. But to understand where the metaverse will really move the needle for brands, and how to think about it, the distinction is important.

[Image: Aarron Spinley’s “Engagement Stack”]

Now, there is a multitude of exotic ideas for brand activations and the like using the metaverse. These are clearly experiences. Equally, we should expect that some services, when delivered through the metaverse, will for a short time also be very experiential (memorable) in nature due to the novelty factor. But as the metaverse normalises, this won’t sustain. So most opportunities for your company, simply because of the way an engagement stack works (see above), will present themselves in the service layer. And that means you have to get your head around two key things: journeys and choice.

Dictation is Dead

The mistake so many companies make is to apply old ideas to new surroundings. Case in point: many make the critical error of using the populist but dated idea of journey mapping in today’s world. They may not know it, because “engagement literacy” is so low, but it fails abysmally. And it will be a royal cluster you-know-what in the metaverse. As a technique, and even as an evolving toolkit, this was an important approach to better understand customers for a fair while. Although the more accurate summation of the practice, its real intention if you like, was always to dictate customers’ resulting buying journey, not to understand their journey in and of itself. In the main, this worked, until somewhere around 2010–2014, depending on your view.

You see, when journey mapping was popularised, it was off the back of work by Colin Shaw in 2002, who had originally coined it “moment mapping.” But in 2002 – a full twenty years ago as I write this – most brands were managing an average of just two or three channels.
What followed was the explosion of the Internet, mobile, cloud, social media, and device proliferation, such that by the close of 2019, we found ourselves dealing with up to 100 channels.


Systemic Gaps And Geopolitical Tensions Define Europe’s Cybersecurity Threats In 2024

European businesses, much like their global counterparts, are caught in a delicate dance, with CISOs coping with sector-specific vulnerabilities, a regulatory maze, and geopolitical complexity. Forrester’s report, European Cybersecurity Threats, 2024, offers European security leaders some much-needed clarity.

Security Fundamentals Matter More Than Security Theater

Technology and security professionals often find themselves captivated by the allure of the exotic, shiny technology toy. While exploring new innovations and attack mechanisms might seem to keep you ahead of the curve, don’t forget that true strength lies in a solid foundation. Most cyberattacks stem from neglecting the fundamentals. Patch management, endpoint detection and response, vulnerability scanning, and asset management are the cornerstones of any robust security effort, regardless of how alluring other “à la mode” topics may be. Security pros will see the following trends this year:

Operational technology (OT) security needs to move from PowerPoint plans to implementation. European cyberthreats continue to evolve, with nation-state actors deploying advanced persistent threats to infiltrate critical infrastructure, government networks, and private-sector systems. Critical sectors such as energy, telecommunications, healthcare, and defense are prime targets for cyber-espionage groups and need to level up their security: “planning” for OT is not sufficient. You need to execute now, as threat actors have gone well beyond “planning,” “roadmaps,” and “visioning sessions.” With the NIS2 Directive casting the net even further, regulators will also start asking difficult questions about OT security.

Implementing threat hunting and leveling up contingency planning are required in order to get ahead. Cyberattacks are inevitable, and while it is important to have extensive detection capabilities, organizations also need to plan for system failures.
As threat actors innovate and find new ways to evade detection systems, security leaders need to invest in threat hunting capabilities to proactively identify embedded actors, and in containment capabilities to minimize impact. European security leaders should test contingency plans to respond to regulatory demands for resilience and rapid recovery, given the rocket boosters provided by NIS2.

Personal data theft has seen a resurgence. Incidents involving personal data theft increased in the last year within European organizations, consistent with global trends. A rise in exotic social engineering techniques such as deepfake audio means that it is only a matter of time before fine-tuned AI models designed to mimic specific individuals are applied to social engineering attacks. The data used to train these models will come from your organization if you do not secure personal data beyond compliance.

Forrester provides practical guidance on how to deal with the threats introduced by emerging technology. The report goes into the threats that European security leaders face and how they can deftly address them today while anticipating the intrigues of tomorrow. Forrester clients can book a guidance session with one of us and can read the complete report here.
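The emphasis above on fundamentals over security theater can be made concrete with a small sketch. The scoring rule, field names, and sample assets below are illustrative assumptions for this article, not part of Forrester’s methodology: a minimal way to triage patch-management debt, weighting internet-facing assets more heavily because attackers can reach them directly.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    internet_facing: bool
    missing_critical_patches: int

def patch_priority(asset: Asset) -> int:
    # Illustrative score: critical patch debt, doubled for
    # internet-facing assets that attackers can reach directly.
    weight = 2 if asset.internet_facing else 1
    return asset.missing_critical_patches * weight

assets = [
    Asset("hr-laptop-17", internet_facing=False, missing_critical_patches=3),
    Asset("vpn-gateway", internet_facing=True, missing_critical_patches=2),
    Asset("build-server", internet_facing=False, missing_critical_patches=1),
]

# Triage queue: highest-risk assets first.
queue = sorted(assets, key=patch_priority, reverse=True)
print([a.name for a in queue])  # ['vpn-gateway', 'hr-laptop-17', 'build-server']
```

A real program would draw these inputs from an asset inventory and a vulnerability scanner; the point is only that the basics (knowing your assets and their patch state) are what make any prioritization possible.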


Shaping the Future of Healthcare Workflow Processes and Advancing Life Sciences Innovation

The Asia/Pacific healthcare sector is at a pivotal moment, as the industry transitions from a ‘care anywhere’ phase to an ‘AI everywhere in care management’ phase. This essential transition is supported and driven by a focus on robust clinical data sets and an evolving connected care ecosystem. To efficiently manage the care delivery process, care providers are prioritizing clinical, operational, and administrative workflow productivity. As a result, GenAI has emerged as a transformative force, with huge potential to revolutionize the future of healthcare workflow processes and the care delivery system.

Dedicated Budget for Generative AI, Initial Investments Focused on PoCs

IDC’s recent survey shows that in Asia/Pacific over half of healthcare and life science organizations are planning to have a dedicated budget for GenAI. IDC data also shows that almost 40% of healthcare providers and nearly a third of life sciences firms in the region are currently focused on proofs of concept (PoCs) of GenAI models as part of testing tools and solutions. This trend mainly reflects the nascent stage of tech adoption and healthcare organizations’ testing of scalability and the partnership ecosystem. GenAI adoption is set to create a positive impact on clinician experience, patient engagement, and workflow efficiency management for healthcare provider organizations. In life sciences, the initial impact will be mostly on drug discovery and design, enhanced patient engagement, streamlined clinical trials, and patient safety. Data is at the core of AI, and this is driving healthcare organizations to increase their focus on EHRs and investments in cloud adoption to make their digital infrastructure and data platforms more robust and secure.

Evolving Partner Ecosystem

Most healthcare and life sciences organizations are of the opinion that GenAI models that leverage their own business data will give them a significant advantage over competitors.
Those with mature and secure clinical data foundations, such as multi-cloud and hybrid cloud architectures, are better positioned to take full advantage of GenAI at a faster pace. At the same time, the focus on data security has increased as the healthcare sector faces intense cyberattack threats. As a result, the GenAI ecosystem involves IT service providers, security and platform providers, and hyperscalers in solution deployments.

Time to Align GenAI Use Cases for Healthcare Organizations

As the demand for GenAI intensifies, healthcare and life science organizations in the region are shifting their focus towards healthcare-specific GenAI use cases. To accommodate this demand, IDC recently released studies documenting GenAI use cases in the healthcare provider segment (GenAI Use Case Taxonomy, 2024: The Healthcare Industry) and the life sciences sector (GenAI Use Case Taxonomy, 2024: The Life Sciences Industry), addressing business impact, metrics, and data modality for each use case. Current GenAI deployments in the Asia/Pacific region predominantly address clinical documentation and summarization, content creation for clinician-to-patient communication, personalization of patient/consumer experiences, patient engagement, and drug discovery and design. These use cases are set to accelerate personalization, care experience, and workflow productivity.

Challenges on the Journey to GenAI Adoption

The journey to GenAI adoption is not without risks. Regulatory risks and higher infrastructure costs are the limiting factors for GenAI adoption among healthcare and life sciences organizations in the region. Organizations will need to carefully consider tech partnerships and align use cases to implement GenAI solutions effectively and safely.
IDC data shows that healthcare CIOs expect a software provider to offer robust data security, seamless and intuitive AI models to work with, and GenAI models that support a broad range of content, both structured and unstructured.

Road Ahead

Moving ahead, creating a robust clinical data foundation, aligning GenAI use cases with both organizational priorities and tech providers’ offerings, and exploring flexible pricing options, along with trust and transparency in the solution engagement, will ensure a smooth transition of GenAI use case deployments from PoCs into production. GenAI adoption in healthcare and life sciences, though at a nascent stage, is set to have a significant impact on clinician efficiency, workflow productivity, and hyper-personalization of the patient experience. Currently, there is increased priority on PoCs as part of GenAI model deployments, but this is set to transition to full-fledged deployments supported by mature clinical data sets, regulatory support, enhanced skill sets, clinician buy-in, and alignment of GenAI use cases with organizational priorities. For additional reference, please access IDC Health Insights’ report GenAI in Healthcare and Life Sciences: Current Trends and Future Potential in Asia/Pacific. Learn what matters most to your customers with IDC’s AI Use Case Discovery Tool.


Nvidia just dropped a bombshell: Its new AI model is open, massive, and ready to rival GPT-4

Nvidia has released a powerful open-source artificial intelligence model that competes with proprietary systems from industry leaders like OpenAI and Google. The company’s new NVLM 1.0 family of large multimodal language models, led by the 72-billion-parameter NVLM-D-72B, demonstrates exceptional performance across vision and language tasks while also enhancing text-only capabilities. “We introduce NVLM 1.0, a family of frontier-class multimodal large language models that achieve state-of-the-art results on vision-language tasks, rivaling the leading proprietary models (e.g., GPT-4o) and open-access models,” the researchers explain in their paper.

By making the model weights publicly available and promising to release the training code, Nvidia breaks from the trend of keeping advanced AI systems closed. This decision grants researchers and developers unprecedented access to cutting-edge technology.

[Image: Benchmark results comparing NVIDIA’s NVLM-D model to AI giants like GPT-4, Claude 3.5, and Llama 3-V, showing NVLM-D’s competitive performance across various visual and language tasks. (Credit: arxiv.org)]

NVLM-D-72B: A versatile performer in visual and textual tasks

The NVLM-D-72B model shows impressive adaptability in processing complex visual and textual inputs. Researchers provided examples that highlight the model’s ability to interpret memes, analyze images, and solve mathematical problems step by step. Notably, NVLM-D-72B improves its performance on text-only tasks after multimodal training. While many similar models see a decline in text performance, NVLM-D-72B increased its accuracy by an average of 4.3 points across key text benchmarks. “Our NVLM-D-1.0-72B demonstrates significant improvements over its text backbone on text-only math and coding benchmarks,” the researchers note, emphasizing a key advantage of their approach.
[Image: NVIDIA’s new AI model analyzes a meme comparing academic abstracts to full papers, demonstrating its ability to interpret visual humor and scholarly concepts. (Credit: arxiv.org)]

AI researchers respond to Nvidia’s open-source initiative

The AI community has reacted positively to the release. One AI researcher, commenting on social media, observed, “Wow! Nvidia just published a 72B model with is ~on par with llama 3.1 405B in math and coding evals and also has vision ?” Nvidia’s decision to make such a powerful model openly available could accelerate AI research and development across the field. By providing access to a model that rivals proprietary systems from well-funded tech companies, Nvidia may enable smaller organizations and independent researchers to contribute more significantly to AI advancements. The NVLM project also introduces innovative architectural designs, including a hybrid approach that combines different multimodal processing techniques. This development could shape the direction of future research in the field.

NVLM 1.0: A new chapter in open-source AI development

Nvidia’s release of NVLM 1.0 marks a pivotal moment in AI development. By open-sourcing a model that rivals proprietary giants, Nvidia isn’t just sharing code: it’s challenging the very structure of the AI industry. This move could spark a chain reaction. Other tech leaders may feel pressure to open their research, potentially accelerating AI progress across the board. It also levels the playing field, allowing smaller teams and researchers to innovate with tools once reserved for tech giants.

However, NVLM 1.0’s release isn’t without risks. As powerful AI becomes more accessible, concerns about misuse and ethical implications will likely grow. The AI community now faces the complex task of promoting innovation while establishing guardrails for responsible use. Nvidia’s decision also raises questions about the future of AI business models.
If state-of-the-art models become freely available, companies may need to rethink how they create value and maintain competitive edges in AI. The true impact of NVLM 1.0 will unfold in the coming months and years. It could usher in an era of unprecedented collaboration and innovation in AI. Or, it might force a reckoning with the unintended consequences of widely available, advanced AI. One thing is certain: Nvidia has fired a shot across the bow of the AI industry. The question now is not if the landscape will change, but how dramatically, and who will adapt fast enough to thrive in this new world of open AI.


How Accurate Were Our Predictions For 2024?

Each year, the Forrester research team gathers to revisit the predictions we made the previous fall. It is, of course, a thrill to see what we’ve gotten right. But when you take the sorts of bold swings that we do every year, you inevitably sometimes miss. As we prepare to launch our 2025 predictions (sign up here to be alerted once they’re live), here’s a look at how some of our predictions from a year ago played out.

The Hits

The prediction: Legal action over privacy protections will strike a major data provider.

The reality: This was a big hit, as not just one but two big companies recently settled claims related to sharing personal information with other firms. First, ZoomInfo agreed to pay nearly $30 million to resolve lawsuits claiming that it had used people’s names and job-related details without their consent to promote its subscription service. Soon after, Oracle agreed to pay $115 million to settle accusations of creating “digital dossiers” with people’s personal information and selling them to marketers. Prior to this year, B2B companies hadn’t been the targets of privacy litigation or regulatory actions of the type levied against consumer brands. But as more B2B data providers shift focus from firmographics to targeting individuals, their risk increases, so instituting safeguards such as stringent consent and preference management is critical.

The prediction: Sixty percent of employees will get prompt engineering training.

The reality: In Forrester’s Digital Workplace And Employee Technology Survey, 2024, 61% of global information workers indicated that they’ve been through some degree of training on how to use AI for work, while 68% indicated knowledge of prompt engineering and how to use it. Since prompt engineering is the key to unlocking generative AI’s benefits, we were glad to see this prediction materialize.

The prediction: The number of green consumers will stay constant, despite climate chaos and backlash.
The reality: Despite evidence of accelerated weather-related disasters and a desire among many consumers to act sustainably, we predicted that inflation and the cost of living would heavily influence buying decisions. New Forrester data bears this out, showing that the share of “Active Green” consumers – those who pay close attention to companies’ impact on the environment and overwhelmingly choose eco-friendly items over low-cost or convenient ones – has held close to steady (it’s down 1–2 percentage points in the markets we surveyed). Though Active Greens are a relatively small segment (fewer than one in five consumers fall into this category), our advice to companies is to recognize the cognitive dissonance that many feel when making purchases. Now is the time to innovate to deliver more environmental value for the same price, particularly as environmentally conscious younger consumers gain buying power.

The Misses

The prediction: Enterprise AI initiatives will boost productivity and creative problem-solving by 50%.

The reality: We let our bullishness on AI cloud our judgment on this one. While some processes and teams are seeing benefits from AI-powered applications, its impact right now is nowhere near 50%. We firmly believe that AI, and particularly generative AI, will be a game-changer, and the idea that people would use these tools to free up headspace and creativity wasn’t wrong. But change on the scale that we predicted will take longer to unfold.

The prediction: Media titans will propel in-game advertising into the fastest-growing media channel.

The reality: Microsoft’s acquisition of Activision Blizzard last year and Sony’s rumored acquisition of Take-Two didn’t fuel the surge in in-game advertising that we expected. Despite gaming’s continued huge popularity, spending on in-game ads actually dipped in 2024 and now comprises less than 3% of digital ad spending. Concerns over the ease of buying and planning in-game ads have held advertisers back thus far.
The Jury’s Still Out

The prediction: Generative AI will surface insights that dictate one in five new B2B product launches.

The reality: We expected product teams to use generative AI (genAI) for ideation and innovation, as it can help them rapidly sift through customer data for insights. That seems to be happening: One-quarter of the product management decision-makers we recently surveyed said that their organizations use genAI to identify product opportunities. Though we can’t definitively say right now that genAI is dictating product launches, conversations with our clients suggest that the momentum is swinging in this direction.

What’s In Store For 2025?

Stay tuned for our predictions for the coming year, which will span topics including automation, the future of work, customer experience, business buyers, and many more. (If you’re a Forrester client, check out our newly published artificial intelligence and tech leadership Predictions reports.) We look forward to making and publishing these bold calls each year to help you anticipate what’s ahead. Sign up to get alerted as soon as Predictions 2025 goes live.


AI Helps Field Service Focus on Customer Value

When something breaks in your home, your office, or a venue you frequent, what is your expectation? Is it that you will just deal with having a broken product, offline printer, or down elevator? Most of us would expect a service technician to show up to the rescue and return the given product or asset to operation so we can get back to productivity.

In IDC’s recent Product Innovation and Aftermarket Services Survey, service leaders noted a priority to improve service (quality and speed) to customers. But too often, aftermarket service organizations have focused on just ensuring a warm body arrives at a customer site within the service level agreement (SLA), with little importance put on actually achieving resolution or enhancing the customer experience. As customers explore options for the services they receive, aftermarket service providers will need to get better at delivering more than just the minimum, enabling the field service team to become experts at engaging a customer in a special and personalized way.

Field service and the aftermarket are too often driven by meeting an SLA. This minimum requirement of meeting a service window of 4-8 hours after a failure has been reported, processing a warranty claim within 30 days, or ensuring an asset is available 80% of the time has long been the norm. Meeting minimal requirements is quite profitable for the service organization but can be short-sighted as competitors enter the market and begin to offer service, support, and enhanced experiences of the same or better quality. To address this pending disruption of competitive factions and heightened customer expectations, field service organizations will need to prioritize value, not just meeting an SLA. This will raise the cost to serve in the short term but in turn earn the right to request a greater share of customer wallet as value delivered improves for the customer or operator.
This shift to value and enhanced, personalized experiences will ultimately require better-quality data, contextualized customer insights, and freed-up time to focus on delivering value. Artificial intelligence (AI) provides an opportunity to close the gap between data and insights on the front line. IDC defines AI as the ability of computers to learn without being programmed, applied to large sets of data for business advantage. But how should field service organizations reconcile the hype around AI to usher in the era of intelligence at the point of service? Field service organizations should prioritize the following as they explore the potential of AI in the coming weeks, months, and years:

Understand the pulse of your employees and customers. Voice-of-customer and voice-of-employee activities are often established for the primary benefit of the organization (i.e., to increase sales/margins or retention rates). In this new era of AI, field service organizations will need to listen to the needs and concerns of customers and employees. As AI becomes more pervasive across industries, field service organizations must tackle the elephants in the room around AI: privacy and job displacement. Too much of the discussion around AI in the B2B world has been the fear that it will replace jobs or result in IP theft. This view of potential negatives neglects to amplify all of the potential positive outcomes of what AI can offer. Educating customers and service employees about the value of AI and how these technological capabilities can improve the service experience, customer outcomes, and employee productivity is crucial to adoption and comfort. Without understanding customer and service employees’ fears about AI, organizations will struggle to maximize the opportunities that will come with this innovative technological advancement.

Shift the KPIs that measure success in the field.
The promise of AI in field service revolves around improved operational efficiency, predictive/prescriptive service outcomes, and improved productivity of the team. However, there is a gap between the metrics currently being measured and what should be measured in the AI era. If AI is to improve the speed of service, technicians should be measured on the value they are providing to the customer, not on how many more jobs they can complete. The improved speed of issue resolution, as AI provides better answers to the reason for failure, should allow the humans on the service team to focus on the customer. This shift in the role a field service technician can play in customer outcomes is profound: no longer is the technician in place solely to turn a wrench, but to prioritize customer engagement. Therefore, the KPIs that matter aren’t work orders closed in a given day but experiential and value-based measures. These new metrics may be more difficult to measure but will tell a better story of customer impact, future revenue opportunities, and lifetime value.

Highlight the positive and address the (potential) negatives. Right now, there are too many field service technicians who can efficiently get on site in front of a customer or asset but fail to resolve the issue on a first visit. Issue resolution is becoming more and more complex as assets get smarter, supply chain networks struggle with resiliency, and the field force ages out. The ability to have the right part, right skills, and right insights at the right time is becoming a fairy tale for too many service organizations. On the front line is the field service engineer who has to advise a customer or operator in need that service cannot be completed, resulting in assets, products, and equipment remaining down.
Service leaders must communicate to the field service team, both the in-office planning/dispatching teams and the engineers in the field, AI’s ability to drive insights and efficiency while reducing non-value-added task work. Skepticism of technology from service teams has preceded the AI era, but the AI conversation brings with it the fear of machines taking over to the detriment of the humans. However, fear comes from a lack of communication, visibility, and buy-in around strategy and
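The shift from volume KPIs to value-oriented KPIs described above can be sketched in a few lines. The metric names, record fields, and sample data below are illustrative assumptions, not an IDC framework: the same set of service visits scored by a traditional volume KPI versus two value-oriented KPIs.

```python
from dataclasses import dataclass

@dataclass
class Visit:
    technician: str
    resolved_first_time: bool
    csat: float  # customer satisfaction score on a 1-5 scale

def jobs_closed(visits: list) -> int:
    # Traditional volume KPI: how many work orders were closed.
    return len(visits)

def first_time_fix_rate(visits: list) -> float:
    # Value-oriented KPI: share of visits resolved on the first trip.
    return sum(v.resolved_first_time for v in visits) / len(visits)

def average_csat(visits: list) -> float:
    # Experiential KPI: mean customer satisfaction across visits.
    return sum(v.csat for v in visits) / len(visits)

visits = [
    Visit("ana", True, 4.8),
    Visit("ana", False, 2.5),
    Visit("ben", True, 4.2),
    Visit("ben", True, 4.6),
]

print(jobs_closed(visits))          # 4
print(first_time_fix_rate(visits))  # 0.75
```

A technician who closes the most jobs can still score poorly on the fix rate and satisfaction metrics, which is exactly the gap between activity and value that the new KPIs are meant to expose.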


The Secret To High Growth: Cocreate A Customer-Obsessed Strategy

In the competitive B2B landscape, above-average growth is a common ambition, yet many businesses fail to achieve this objective. According to Forrester’s State Of Customer Obsession Survey, 2024, 83% of B2B decision-makers see meeting commercial growth targets as a top priority, but only 24% are outpacing their industry’s average growth. The underlying issue? A misalignment in growth strategies that are too often aspirational, internally focused, and not attuned to customer needs. High-growth companies distinguish themselves by being more likely to have stronger growth strategy alignment between marketing, sales, and product functions. These companies also put customer needs at the center of their leadership, strategy, and operations.

Don’t Expect A Subpar Strategy To Yield Above-Average Growth

As Yogi Berra, the baseball legend known for his malapropisms, once quipped, “If you don’t know where you’re going, you’ll end up someplace else.” Simultaneously driving growth, reducing costs (or at least being more efficient), and improving customer experience is a tall order for any B2B organization. The answer lies in cocreating a customer-obsessed growth strategy across marketing, sales, and product.

Key Foundations To Develop An Effective Growth Strategy

Four foundational elements illuminate the path forward for B2B leaders:

Move beyond aspirations. High-level goals such as “grow revenue by 30%” or “become the industry leader” are not strategies but aspirations. Effective strategy development must deconstruct these goals into concrete decisions, objectives, priorities, and initiatives.

Align executive assumptions and decisions. Successful strategy execution requires many subordinate decisions and actions. A well-articulated growth strategy codifies executive assumptions and decisions, accelerating downstream decision-making and improving the quality and alignment of strategy execution.

Pivot to customer obsession.
The increasingly digital-savvy, self-directed buyer and continued market turbulence have exposed a gap in traditional B2B growth strategies. Prioritizing customer problems, needs, and expectations is crucial. Forrester’s research finds that customer-obsessed companies have significantly higher revenue growth, profitability growth, customer retention, and employee engagement. Make tough calls about focus. Resources are finite. Making exclusions explicit is key to curtailing drift in execution. A well-formed growth strategy makes what’s in scope and what’s out of scope explicit, ensuring no ambiguity when it comes time to execute. Strategy Development Requires A Process And Cross-Team Effort The Forrester Customer-Obsessed Growth Strategy Model offers a structured decision-making process, emphasizing the importance of the company’s vision, mission, and purpose in grounding strategy. It requires a comprehensive assessment of business context and market conditions, aiming to foster a customer-centric perspective that drives strategic growth vectors. Developing a customer-obsessed growth strategy is a collaborative effort. It necessitates the collective expertise of marketing, sales, product, and IT leaders to bring diverse insights and ensure a unified approach toward execution. This team effort extends beyond senior leadership to include roles critical for bringing the strategy to life, such as revenue operations, portfolio marketing, product management, demand and customer marketing, and customer success. Strategic Clarity Drives Execution Efficiency And Success For B2B enterprises aiming for high growth, the path forward is clear: Develop a customer-obsessed growth strategy that aligns internal capabilities with external market demands. This not only ensures strategic clarity and execution efficiency but also positions companies to navigate the complexities of modern business dynamics successfully. 
By centering growth strategy around customer needs and fostering cross-functional alignment, businesses can unlock their growth potential. Forrester clients can read more in the report, Accelerate B2B Growth With A Customer-Obsessed Strategy, and schedule a guidance session to get started.


OpenAI isn’t going anywhere: raises $6.6B at $157B valuation

Despite a wave of executive departures in recent months, OpenAI has today announced a long-anticipated new funding round. It was always expected to be a whopper, but the amount raised — $6.6 billion at a $157 billion total company valuation — makes it the largest venture capital round in history to date, according to Axios. The round was led by Thrive Capital, according to Bloomberg, while CNBC notes that heavy hitters including Nvidia and Microsoft plowed more cash into this round as well. In announcing the funding on its website, OpenAI noted that ChatGPT alone counts more than 250 million weekly unique users. “The new funding will allow us to double down on our leadership in frontier AI research, increase compute capacity, and continue building tools that help people solve hard problems,” the company wrote in a short blog post.

Reasons for skepticism?

However, the news was still greeted with skepticism among AI critics, including the outspoken public relations expert and tech writer Ed Zitron, whose latest newsletter is headlined “OpenAI is a bad business.” He argues that OpenAI’s decision to take a reported $500 million from the SoftBank Vision Fund — which has notably invested in duds like WeWork — combined with its reliance on individual ChatGPT subscriptions rather than API usage or licensing, suggests it is not well positioned to succeed as a for-profit. These are, in my opinion, fair criticisms, as is noting that Apple reportedly declined to invest in the firm after giving it consideration, potentially in the wake of former chief technology officer Mira Murati’s resignation just last week.
And then came the report from The Financial Times that OpenAI made it a condition for investors in this round that they not back rivals, including Anthropic (which was founded by former OpenAI researchers and continues to pick up more exiting execs) and Musk’s xAI, recently reported to have switched on its Memphis training supercluster “Colossus” with 100,000+ Nvidia H100 GPUs. This seemingly shows that OpenAI is worried about the competition catching up. Musk, for his part, took the news of OpenAI’s reported funding exclusivity conditions with his typical blunt criticism, calling the company evil on his X account. And indeed, competition in the AI space is intensifying, with newer models emerging such as Liquid AI’s non-transformer-based Liquid Foundation Models (LFMs), and Google and Anthropic also fielding compelling enterprise and consumer-facing options. Meanwhile, Meta and Alibaba are releasing powerful open source models for free.

The OpenAI bull case

Still, OpenAI’s models top the charts on third-party performance benchmarks, and every time they have been overtaken, OpenAI has released an update or an entirely new class of models, such as the o1-preview series, that retakes the throne. So for now, fueled by $6.6 billion in fresh funding and armed with new models, developer tools, and aggressive cost-cutting measures for developer customers (intelligence that is “too cheap to meter,” in the words of many in the AI industry), it appears that OpenAI is not going anywhere anytime soon. It may, in fact, be too big to fail, as I speculated it was becoming a few weeks ago. For developers building products atop the company’s AI models and frameworks, this is probably welcome news, as those offerings are likely to remain stable and supported going forward.

Will OpenAI give GPT creators any more $$$?
However, one big question remains regarding OpenAI’s custom GPT Store, its version of an AI app store, which launched in January 2024 and allows any ChatGPT Plus user to create and share custom versions of ChatGPT designed to fulfill specific roles and perform specific tasks. OpenAI CEO and co-founder Sam Altman said at its DevDay developer conference in late 2023 that revenue sharing would be coming, and some users reported receiving some revenue from their GPTs, but we haven’t heard much from OpenAI about it since. Now that the company is flush with cash, I’m wondering whether it will start paying out more to more GPT creators (selfishly as well, since I’ve created a few custom GPTs — full disclosure). I’ve reached out to the company to ask about that and will update when I hear back. Either way, OpenAI’s coffers have been refilled, and despite the chaos behind the scenes, the company continues to ship new AI products regularly — though we’re all still waiting on the public release of its AI video model Sora.


Decentralized Digital Identity: The Global Acceptance Network Gains Momentum

Bhutan is the first country to join the Global Acceptance Network (GAN). GAN, in turn, is a foundation that aims to operate a nonprofit decentralized digital identity (DDID) network. GAN was founded by Accenture, cheqd, Interac, NTT Digital, Pearson, and other organizations. GAN will not develop technology standards on its own. Its goals include creating global acceptance of DDID/verifiable credentials by: 1) building a trust framework and network; 2) contributing to open standards organizations’ DDID work (W3C, ISO); and 3) most importantly, promoting and ensuring interoperability between existing DDID ecosystems, such as the European Union’s eIDAS European Digital Identity Wallet initiative, the USA’s AAMVA mobile driver’s licenses, the Nordic BankID framework, and others. This diagram shows GAN’s high-level architecture:

Forrester expects that the primary vertical market use cases for GAN will include nonprofit and government, travel (e.g., digital passports, airline cooperation), education, and healthcare. GAN’s long-term success will depend on:

National governments’ and regions’ willingness to work with a trust proxy. While governments have been issuing national digital identities and wallets, government DDID’s focus (understandably) has been on domestic use cases (such as identification of citizens to government agencies), law enforcement, and utilities (verifying account holders to utility companies). Governments have not yet cared much about international interoperability of their digital driver’s licenses and national IDs with other countries’ verification systems. Working with a trust proxy (such as GAN) requires compromise — traditionally, not the forte of national government law and IT decision-makers.

Reconciliation of privacy measures across GAN member ecosystems. GDPR has served as a de facto blueprint for many jurisdictions’ privacy laws, but the landscape is still rugged when it comes to harmonizing privacy laws and data protection across borders, or even across administrative regions (states, prefectures, etc.) of a single country. GAN needs to be able to take these differences into account and help proxy not just trust but also data protection and privacy requirements between its member countries and organizations.

Viable business models for all DDID initiatives. While the common good is a significant motivation in DDID and digital national ID issuance and acceptance, issuer and verifier organizations and ecosystems need financial motivation to join GAN. So far, the DDID business model is unclear and will need to be resolved to drive broader industry adoption.

GAN must serve clear, and initially simple, use cases. Forrester expects that GAN’s acceptance will also depend on how well it can immediately serve specific use cases such as travelers’ use of digital passports and national IDs at border crossings, presentation of higher-education diplomas across borders, proving coverage of travelers’ health insurance during a visit to a foreign country, and others.
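Since GAN’s stated goal is interoperability of verifiable credentials across ecosystems, it may help to see what a credential looks like at the data level. Below is a minimal, hypothetical sketch of a W3C Verifiable Credential payload in Python; the issuer and subject DIDs, the degree fields, and the check logic are illustrative assumptions, not GAN’s actual format, and the cryptographic proof section that real credentials carry is omitted for brevity.

```python
from datetime import datetime, timezone

# Minimal W3C Verifiable Credential payload (illustrative values only).
# Real credentials include a proof section (e.g., a Data Integrity
# proof or JWT signature), omitted here for brevity.
credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "UniversityDegreeCredential"],
    "issuer": "did:example:university-registrar",   # hypothetical issuer DID
    "issuanceDate": "2024-06-01T00:00:00Z",
    "expirationDate": "2034-06-01T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:graduate-1234",          # hypothetical holder DID
        "degree": {"type": "BachelorDegree", "name": "BSc Computer Science"},
    },
}

def basic_checks(vc: dict, trusted_issuers: set) -> bool:
    """Structural checks a verifier might run before signature
    validation: is the issuer trusted, and is the credential unexpired?"""
    if vc.get("issuer") not in trusted_issuers:
        return False
    expiry = datetime.fromisoformat(vc["expirationDate"].replace("Z", "+00:00"))
    return datetime.now(timezone.utc) < expiry

print(basic_checks(credential, {"did:example:university-registrar"}))
```

A cross-border verifier (say, a university abroad checking the diploma) would run checks like these against a trust framework such as the one GAN proposes, which is what makes a shared list of trusted issuers, and agreement on credential formats, the crux of the interoperability problem.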
