The Future Of Banking: By 2030, Banking Will Be Invisible, Connected, Insights-Driven, And Purposeful

In the rapidly evolving financial landscape, banks face challenges that require them to adapt and innovate. But amid all these changes, the importance of trust remains constant. Trust is the foundation upon which successful banking relationships are built, and it will continue to be a crucial factor in shaping the future of the industry. Yet consumer trust in banks has stalled globally, and factors such as poor customer experiences, privacy and data breaches, and the growth of AI in banking are straining it further.

A decade ago, banks hoped that digital transformation would revolutionize the way they operate. Unfortunately, many banks failed to prioritize customer obsession and truly transform their business models. What remains are transaction-focused organizations that leave customers feeling that their banks don't understand or care about their financial needs. This disconnect hampers trust, with financial, competitive, and reputational consequences.

To meet the needs of future consumers, banks will use technology to anticipate customer needs. They will deliver proactive, relevant services, connecting with partners and ecosystems to create better financial outcomes for their customers. By leveraging data and insights to create value, they will align their businesses with customer values in a purposeful way. Banks can stay relevant and build trust in a rapidly changing industry by adopting these characteristics:

- Invisible. Leading banks will use technology and far deeper customer insights to insert financial services at the customer's moment of need, potentially at the expense of brand visibility.

- Connected. Technologies, partnerships, ecosystems, and platforms will combine across multiple industries, sharing data and resources to deliver financial outcomes.

- Insights-driven. Banks will create value from data and elevate their custodianship of consumer trust. An expanded role around consent and identity will give consumers fine-grained control of their financial and digital lives and build a mutually beneficial relationship.

- Purposeful. Consumers will prefer banks that align with their values in a new, more purposeful age that sees local and cooperative principles aligning with matters of global responsibility.

Leading banks are adjusting strategies, embracing innovation, rethinking business models, and deepening their understanding of their customers to chart their future paths. To learn more about what leading banks are doing and how to prepare your firm for the future of banking, clients with additional questions can book an inquiry or guidance session.


Clarifying Potential Paths to Market is Crucial to Maximize Network API Revenue

Historically, telcos could rely on providing voice and data connectivity to achieve revenue growth and consistent profit margins. However, since roughly the introduction of 3G cellular network technology and the first version of the Apple iPhone (circa 2007), third-party digital innovators have moved in to siphon off emerging monetization opportunities while curating vast developer ecosystems, relegating telcos to the connectivity-provider role. Ever since, telcos have struggled to take advantage of ever-faster networks, a growing diversity of devices, and the massive popularity of social media and mobile video applications, while keeping pace with ever more exacting network performance requirements.

Against this backdrop, the meshing of 5G networks and API exposure can empower telcos to reinsert themselves as a key connectivity platform within the digital landscape, unlocking the ability to more easily sell and scale customized, programmable connectivity underpinned by app developer platforms grounded in telecom network APIs.

FIGURE 1: Network API Primary Segments

Telecom Ecosystem Evolution: Network API Market Makers/Drivers and Likely Roles

Telcos face several (non-exclusive) paths to market. As 5G service exposure invites a more vibrant telecommunications ecosystem, many stakeholders are exploring how best to foster the development of new API service bundle-fueled services that can generate new innovation and, by extension, new revenue from 5G service exposure. The core constituent groups are described below.

Telcos: Telcos already have the ability to expose network capabilities via an API gateway, enabled by the Service Capability Exposure Function (SCEF) in 4G/LTE networks or the Network Exposure Function (NEF) in 5G networks. Telcos also provide the underlying connectivity, which could be delivered via custom network slices, guided by API and policy definitions that align with developer needs. Developing services (utilizing CAMARA specifications and/or non-standardized APIs) alongside their existing connectivity business models will bring telcos more in line with cloud and edge service providers that focus on enabling third parties to build services on top of their infrastructure. This can lead to deeper monetization of network infrastructure and increase network accessibility and commercial engagement with application developers.

Network Infrastructure Vendors: These vendors provide the underlying infrastructure (e.g., hardware and software) that enables programmable 5G services. Vendors could also conceivably end up helping build service API bundles and offering them as standalone or white-label solutions to communications service providers and platform providers alike. Vendors stand to benefit from a robust 5G API ecosystem on two fronts: increased infrastructure sales required to deliver advanced connectivity services and a new revenue stream. Nokia, Ericsson, and Oracle are among the vendors highlighting early activities in this area.

CPaaS Platforms/API Aggregation: Communications platform-as-a-service (CPaaS) providers such as Vonage and Infobip offer a known way to aggregate and consume APIs for a range of communications services, including customer engagement through multiple channels and two-factor authentication. CPaaS providers and API aggregators are natural channel partners for network APIs, broadening developer market access to these services.

Hyperscalers: Hyperscale cloud providers (HCPs) offer a potential path for exposing network APIs via an API gateway and for integrating the network performance capabilities those APIs enable, along with cloud computing and storage, to build high-value applications for a range of vertical markets and use cases. HCPs all support enormous bases of cloud developers who are well versed in API consumption and lifecycle management. HCPs are actively participating in industry initiatives such as CAMARA and the GSMA Open Gateway Alliance, and they represent a significant potential opportunity.

Independent Software Vendors and Edge Platform Providers: Independent software vendors (ISVs) can design and bundle APIs into SaaS offerings, simplifying API consumption for organizations that lack the ability to embed APIs themselves. In addition, IDC observes an emerging subset of the app platform market that focuses on enabling edge applications (e.g., IoT edge apps) that are hosted and run across edge sites; specific platforms may specialize in discrete vertical opportunities. ISVs can specialize in particular verticals and use cases (e.g., industrial automation, healthcare, and entertainment), providing a logical route to drive network API adoption among enterprise and industrial adopters that are most comfortable consuming new software offerings.

Education and Training Are Keys to Growth

While the opportunity for network APIs is potentially limitless, the key to their success lies largely in the ability of network API proponents to articulate their value in these various contexts. In particular, the largest opportunity may be in educating the developer community on the value network APIs can bring in augmenting enterprise and consumer-facing applications, and on which combinations of network APIs can be brought to bear simultaneously to address requirements spanning Quality on Demand (QoD), edge, security, location, and a number of other network capabilities enabled by APIs. IDC believes that industry groups such as CAMARA, the Open Gateway Alliance, and TM Forum will need to devote as much effort to educating (and potentially certifying) app developers in network API capabilities and best practices as they currently devote to establishing and proving out their technical capabilities.
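To make the programmable-connectivity idea concrete, the sketch below assembles a request body for a temporary Quality on Demand (QoD) boost in the style of a CAMARA network API. The endpoint URL, field names, and profile name are illustrative assumptions for this sketch, not any operator's actual API contract:

```python
import json

def build_qod_session_request(device_ip: str, app_server_ip: str,
                              qos_profile: str, duration_s: int) -> dict:
    """Assemble the JSON body for a hypothetical QoD session request.

    Field names loosely follow the CAMARA QoD style but are illustrative.
    """
    return {
        "device": {"ipv4Address": {"publicAddress": device_ip}},
        "applicationServer": {"ipv4Address": app_server_ip},
        "qosProfile": qos_profile,  # e.g., a low-latency profile name
        "duration": duration_s,     # how long the boost should last, in seconds
    }

# A developer's app requests a one-hour low-latency session between a
# handset and its application server (addresses are documentation examples).
payload = build_qod_session_request("203.0.113.7", "198.51.100.20",
                                    "QOS_LOW_LATENCY", 3600)
body = json.dumps(payload)
# In practice, `body` would be POSTed with an OAuth token to the operator's
# exposed endpoint, e.g. https://api.example-telco.com/qod/v0/sessions.
```

The same request could reach the network through any of the paths described above: directly against a telco's NEF-backed gateway, through a CPaaS aggregator, or via a hyperscaler's API marketplace.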
For a deeper dive into these topics, watch IDC's July 10th webinar, "Revenue Enablers for the Future Telco: APIs, AI, and Emerging Tech."


GenAI: The Time Is Now For Health Insurers To Embrace Innovation And Change Management

Health insurance organizations expect generative AI (genAI) to transform healthcare. They're experimenting to unearth its potential both to automate back-office tasks and to tackle complex processes that elevate the employee and member experience. But both health insurers (HIs) and healthcare provider organizations are starting this journey with poorly performing, outdated IT systems, which Black Book estimates cost the industry $8 billion annually. GenAI-powered transformation can only happen if HIs invest in sturdy, agile technical infrastructure. Just as importantly, they must also develop internal capabilities and readiness for strategy and change management to realize genAI's true potential in their workflows.

My new report, Generative AI: What It Means For Health Insurers, dives into genAI's impact on HIs and examines avenues for early adoption and ways to circumvent potential risks. Some early benefits of genAI applications for HIs are:

- Expediting care coordination through digitizing the intake process. Automating clinical reviews for prior authorization and care management rapidly increases care coordination and prevents delays in care.

- Gaining contextual insights from customer interactions. By analyzing customer interaction data during typical activities (e.g., searching benefits) and high-value moments (e.g., renewal), HIs can better comprehend how members perceive their products, services, and overall experience.

- Enabling knowledge management via conversational experiences. Employee productivity tools powered by large language models (LLMs) in contact centers and internal chatbots help customer service agents resolve customer issues efficiently.

- Enhancing the member experience. HIs can use healthcare-specific LLMs to automate and improve experiences across the member lifecycle.

How Can Healthcare Leaders Stay Up To Date On Generative AI?

GenAI applications are evolving rapidly, and keeping a pulse on who, when, where, why, and how is challenging. HIs should align on values, principles, and goals at an enterprise level, not a functional level. Senior leaders must decide whether they want to reimagine every function within their organization or pursue a more measured approach, such as augmenting their staff's abilities or consumer experiences in small pilots.

How Can Healthcare Organizations Get Involved In Upcoming Forrester GenAI Research?

My research on generative AI in healthcare will continue as healthcare organizations navigate core genAI issues, new security implications, and the impact on and opportunities for employees. Also, see Forrester's dedicated genAI theme page for more insights and guidance. If you would like to participate in our research, please contact me ([email protected]) to schedule a research interview. And if you're a Forrester client, let's talk via a guidance session or inquiry to explore how HIs are tapping into genAI.


Empowering Sales Management with AI

In today's high-stakes sales environment, managers are grappling with an array of challenges that can stifle growth and efficiency. From the daunting task of managing diverse teams and complex sales processes to the relentless pressure of meeting ambitious targets, the role of a sales manager has never been more demanding. Add to this the reality of having to do more with less (static staffing budgets amidst increasing operational complexity) and it's clear that traditional approaches to sales management are no longer sufficient.

Enter artificial intelligence (AI). This transformative technology is not just a buzzword but a practical solution poised to revolutionize sales management. AI's ability to automate administrative tasks, provide personalized training, and deliver data-driven insights offers a beacon of hope for overwhelmed sales managers. By harnessing AI, sales leaders can not only navigate the challenges of their roles more effectively but also unlock new levels of productivity and strategic decision-making. This introduction to AI in sales management marks the beginning of a new era, where efficiency and growth go hand in hand, empowering managers to lead their teams to unprecedented success.

The Challenges Sales Managers Face

In today's high-pressure sales environments, sales managers grapple with a myriad of challenges that test their limits daily. The transition from top-performing salesperson to a managerial role often comes with the assumption that success in sales equates to success in leadership. The reality is far more complex. Sales managers find themselves overwhelmed by an immense workload that includes not just leading and motivating their teams but also handling administrative duties and striving to meet ambitious sales targets. The scarcity of resources, be it time, budget, or staffing, further exacerbates the pressure. Sales managers are also tasked with navigating intricate sales processes and managing a deluge of data from various sources without adequate analytical tools. The diversity within teams, in terms of skill sets, personalities, and working styles, adds another layer of complexity to ensuring cohesion and productivity. Continuous learning and development, for both managers and their teams, are essential to maintain consistency and adherence to sales methodologies, all under relentless pressure to achieve organizational goals.

Despite these challenges, organizations often expect sales managers to do more with less. With staffing budgets remaining stagnant and the tools and processes involved in B2B selling becoming increasingly complex, sales managers are often set up for failure from the start. The high turnover among sales representatives and the significant costs of hiring and training new talent only add to the burden, making the role of sales manager one of the most challenging in the business landscape today.

Revolutionizing Sales Management with AI

In today's dynamic sales environment, AI and machine learning (ML) are essential tools that are reshaping the way sales management operates. By offering personalized training, automating administrative tasks, and providing data-driven insights, AI is setting a new standard for efficiency and growth in sales management.

Personalized Training and Coaching

Gone are the days of one-size-fits-all training programs. AI enables a more personalized approach to training, catering to the unique needs and learning styles of each sales representative. By analyzing sales interactions, AI identifies areas for improvement and tailors training content, ensuring that each member of the sales team receives the most relevant and effective coaching.

Administrative Automation: A Time Saver

AI shines in automating routine tasks that consume a significant portion of sales managers' and representatives' time. From generating personalized emails to logging customer interactions and scheduling meetings, AI tools streamline these processes, freeing up time for more strategic activities. This shift not only enhances productivity but also allows sales managers to focus on coaching and strategic planning.

Harnessing Data-Driven Insights

In the realm of sales management, data is king. However, the sheer volume of data can be overwhelming. AI algorithms excel at sifting through vast datasets, providing real-time performance metrics, identifying bottlenecks, and offering accurate forecasting. These insights empower sales managers to make informed decisions that drive better results for their teams and organizations.

AI is not just transforming sales management; it's revolutionizing it. By providing personalized training, automating administrative tasks, and delivering data-driven insights, AI is enabling sales teams to achieve unprecedented levels of efficiency and growth. As we embrace these technologies, the future of sales management looks brighter than ever.

"In the fast-paced world of sales, managers are often overwhelmed by the sheer volume of data and tasks. AI offers a lifeline, helping them navigate the complexity with precision and efficiency, turning chaos into opportunity."

Navigating the AI Implementation Journey in Sales Management

Integrating AI into sales operations isn't just about deploying new technology; it's about aligning it with your organizational culture, securing leadership buy-in, and ensuring your data is primed for action. Here's how to make AI work for your sales team:

Organizational Culture: The Foundation of AI Adoption

Your company's culture is the bedrock of successful AI integration. A culture that values innovation and is open to change will embrace AI's potential to transform sales management. Conversely, a culture resistant to change may see AI as a threat rather than an opportunity. Cultivating an environment that encourages experimentation and learning is key to leveraging AI effectively.

Leadership Buy-In: Steering the Ship

Without the support of leadership, AI initiatives are likely to flounder. Leaders must not only endorse AI projects but also actively participate in their implementation. This involves allocating resources, setting clear objectives, and demonstrating a commitment to leveraging AI as a strategic tool for sales management success.

Data Readiness: The Fuel for AI

The adage "garbage in, garbage out" holds particularly true for AI in sales. The quality, completeness, and accessibility of your CRM data are critical. Before embarking on your AI journey, assess your data infrastructure to ensure it can support AI analysis. This step is crucial for avoiding pitfalls and setting the stage for meaningful AI-driven insights.

By focusing on these key areas, organizations can navigate the complexities of AI implementation in sales management, transforming challenges into opportunities.


3. How Americans feel about election coverage

There is no consensus among Americans about how easy it is to find reliable information about the presidential election. About four-in-ten U.S. adults (39%) say it has been very or somewhat easy to find reliable information about the 2024 presidential election, a somewhat larger share than the 28% who have found it very or somewhat difficult. An additional 32% say it has been neither easy nor difficult.

By party and ideology

Democrats are much more likely than Republicans to say finding reliable information has been easy, while Republicans are more inclined to say it's been difficult. Around half of Democrats and independents who lean Democratic (52%) say it's been very or somewhat easy to find reliable information about the 2024 election, compared with 29% of Republicans and Republican leaners who say the same. On the other hand, Republicans are about twice as likely as Democrats to say it's been at least somewhat difficult to find reliable election information (39% vs. 18%).

In both parties, views differ by ideology: Conservative Republicans are slightly more likely than Republicans who describe themselves as moderate or liberal to say it's been difficult to find reliable information (42% vs. 35%). Liberal Democrats are more likely than conservative or moderate Democrats to say that finding reliable information has been easy (62% vs. 44%).

Broad assessments of election coverage

A majority of Americans (58%) think the news media have covered the 2024 election well, including 13% who think they have covered it very well. On the other hand, 41% say the news media have done not too well (26%) or not at all well (15%) covering the presidential race. Americans' views on campaign media coverage were almost identical at the same point in the 2020 election cycle.

By party

As in 2020, Republicans are much more critical of election coverage than Democrats. Six-in-ten Republicans say the news media have not covered the 2024 presidential campaign well, compared with just 22% of Democrats who hold this view. And among Republicans, conservatives (69%) are much more likely than those who identify as moderate or liberal (47%) to think the news media are not doing a good job covering the 2024 election.

Within each party, responses differ by age group. Among Republicans, those under 30 are more likely than older adults to say that the media are doing at least somewhat well: 51% say this, versus 42% of those ages 30 to 49 and about a third of those ages 50 and older. Among Democrats, the opposite is true: Adults under 30 are less likely than their elders to say the news media are covering the election well, though a 69% majority still say this.

Americans' views of the news sources they turn to most for election news

Americans are much more positive in their assessments of the sources they turn to most often for news about the presidential election than they are about the news media as a whole. Around eight-in-ten U.S. adults (81%) say the news sources they turn to most often have covered the 2024 election very (27%) or somewhat (54%) well. Far fewer say their go-to sources have covered the presidential election not too well (15%) or not at all well (3%). Americans held similar views about 2020 election coverage by their most common news sources.

By party

Even when it comes to the news sources they use most often, Republicans are twice as likely as Democrats to say these sources have not covered the 2024 election well (22% vs. 11%). But Republicans see their own main sources of election news in a much more positive light than the news media in general. The vast majority of both Republicans (77%) and Democrats (87%) say their most-used news sources have covered this election cycle at least somewhat well.

Election news fatigue

A majority of Americans (59%) say they are worn out by so much coverage of the 2024 presidential election. This figure has been roughly consistent since we first asked this question in 2016. Meanwhile, about four-in-ten say they like seeing a lot of coverage of the campaign and candidates. Similar to when this question was asked in the spring, those who are following the election more closely are more likely to say they like seeing a lot of coverage of the campaign and candidates.

Republicans and Democrats agree on this: 59% of Americans in each party say they feel worn out by so much coverage of the campaign and candidates. This is a change from April, before President Joe Biden withdrew from the race. At that time, Democrats were slightly more likely than Republicans to say they felt worn out by so much election coverage (66% vs. 58%).


Employer Brand And EVP: A Necessary Primer

Traditionally, HR or employee experience (EX) leaders cared about employer value propositions (EVPs) and CMOs cared about employer brand. But EVP and employer brand are tightly linked, as their definitions suggest: Employer brand is the sum of what people outside your organization think about it as a place to work. EVP is the sum of what people inside your organization think about it as a place to work, based on the unique benefits and opportunities your organization offers against the cost and effort required to succeed there.

Who influences external and internal perceptions of the organization? The employees! So, even when HR leaders and CMOs are highly collaborative, if employees don't have a defined role in the equation, they can experience a different reality than the one HR and Marketing are trying to build. In short, employer brand and EVP are everyone's job.

In our new report, What Your Company Means To Your Workforce Matters For Your Talent Strategy's Success, we see three behaviors emerge in organizations that have synchronized everyone's role in employer brand and EVP:

- Align the two through research insights. Utilize employee listening efforts, social media monitoring, and competitive analysis, and align on the resulting insights, to evolve the EVP and employer branding.

- Identify and address dissonance between EVP and lived experience. Through the research effort, identify where there is dissonance between what your employer brand says and what your employees experience.

- Lead with employee voices. Integrate employee perspectives into employer branding efforts to enhance the authenticity and appeal of the employer brand so that it resonates with both current and prospective talent.

Whether you're an HR/EX leader, CMO, or hiring manager who wants to recruit in-demand talent, it's important to understand the dynamics between EVP and employer brand. Reach out if you'd like to schedule a guidance session on these topics or their connection to relevant areas ranging from culture to digital talent recruitment.


DeepMind’s Michelangelo benchmark reveals limitations of long-context LLMs

Large language models (LLMs) with very long context windows have been making headlines lately. The ability to cram hundreds of thousands or even millions of tokens into a single prompt unlocks many possibilities for developers. But how well do these long-context LLMs really understand and utilize the vast amounts of information they receive?

Researchers at Google DeepMind have introduced Michelangelo, a new benchmark designed to evaluate the long-context reasoning capabilities of LLMs. Their findings, published in a new research paper, show that while current frontier models have progressed in retrieving information from large in-context data, they still struggle with tasks that require reasoning over the data structure.

The need for better long-context benchmarks

The emergence of LLMs with extremely long context windows, ranging from 128,000 to over 1 million tokens, has prompted researchers to develop new benchmarks to evaluate their capabilities. However, most of the focus has been on retrieval tasks, such as the popular "needle-in-a-haystack" evaluation, where the model is tasked with finding a specific piece of information within a large context.

"Over time, models have grown considerably more capable in long context performance," Kiran Vodrahalli, research scientist at Google DeepMind, told VentureBeat. "For instance, the popular needle-in-a-haystack evaluation for retrieval has now been well saturated up to extremely long context lengths. Thus, it has become important to determine whether the harder tasks models are capable of solving in short context regimes are also solvable at long ranges."

Retrieval tasks don't necessarily reflect a model's capacity for reasoning over the entire context. A model might be able to find a specific fact without understanding the relationships between different parts of the text.
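The needle-in-a-haystack setup mentioned above is simple to sketch: hide one fact inside long filler text and check whether a model can retrieve it. The snippet below shows only the construction side (the "needle" sentence and "project Aurora" are invented examples for this sketch, and no model is called):

```python
def build_haystack(needle: str, filler: str, n_filler: int, position: int) -> str:
    """Bury `needle` among n_filler copies of `filler` at a given line index."""
    chunks = [filler] * n_filler
    chunks.insert(position, needle)
    return "\n".join(chunks)

needle = "The magic number for project Aurora is 7421."
haystack = build_haystack(needle, "The sky was grey again today.", 1000, 613)
# A long-context model would be prompted with `haystack` plus the question
# "What is the magic number for project Aurora?". Retrieval depth is
# controlled by `position`; context length by `n_filler`.
found = needle in haystack
```

As the article notes, passing this kind of test shows retrieval, not reasoning: the answer is a single isolated fact, so no understanding of relationships within the text is required.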
Meanwhile, existing benchmarks that evaluate a model's ability to reason over long contexts have limitations. "It is easy to develop long reasoning evaluations which are solvable with a combination of only using retrieval and information stored in model weights, thus 'short-circuiting' the test of the model's ability to use the long-context," Vodrahalli said.

Michelangelo

To address the limitations of current benchmarks, the researchers introduced Michelangelo, a "minimal, synthetic, and unleaked long-context reasoning evaluation for large language models." Michelangelo is based on the analogy of a sculptor chiseling away irrelevant pieces of marble to reveal the underlying structure. The benchmark focuses on evaluating the model's ability to understand the relationships and structure of the information within its context window, rather than simply retrieving isolated facts. The benchmark consists of three core tasks:

- Latent list: The model must process a long sequence of operations performed on a Python list, filter out irrelevant or redundant statements, and determine the final state of the list. "Latent List measures the ability of a model to track a latent data structure's properties over the course of a stream of code instructions," the researchers write.

- Multi-round co-reference resolution (MRCR): The model must reproduce parts of a long conversation between a user and an LLM. This requires the model to understand the structure of the conversation and resolve references to previous turns, even when the conversation contains confusing or distracting elements. "MRCR measures the model's ability to understand ordering in natural text, to distinguish between similar drafts of writing, and to reproduce a specified piece of previous context subject to adversarially difficult queries," the researchers write.

- "I don't know" (IDK): The model is given a long story and asked to answer multiple-choice questions about it. For some questions, the context does not contain the answer, and the model must recognize the limits of its knowledge and respond with "I don't know." "IDK measures the model's ability to understand whether it knows what it doesn't know based on the presented context," the researchers write.

Latent Structure Queries

The tasks in Michelangelo are based on a novel framework called Latent Structure Queries (LSQ). LSQ provides a general approach for designing long-context reasoning evaluations that can be extended to arbitrary lengths. It can also test the model's understanding of implicit information as opposed to retrieving simple facts. LSQ relies on synthesizing test data to avoid the pitfalls of test data leaking into the training corpus.

"By requiring the model to extract information from structures rather than values from keys (sculptures from marble rather than needles from haystacks), we can more deeply test language model context understanding beyond retrieval," the researchers write.

LSQ has three key differences from other approaches to evaluating long-context LLMs. First, it has been explicitly designed to avoid short-circuiting flaws in evaluations that go beyond retrieval tasks. Second, it specifies a methodology for increasing task complexity and context length independently. And finally, it is general enough to capture a large range of reasoning tasks. The three tests used in Michelangelo cover code interpretation and reasoning over loosely written text.

"The goal is that long-context beyond-reasoning evaluations implemented by following LSQ will lead to fewer scenarios where a proposed evaluation reduces to solving a retrieval task," Vodrahalli said.

Evaluating frontier models on Michelangelo

The researchers evaluated ten frontier LLMs on Michelangelo, including different variants of Gemini, GPT-4 and GPT-4o, and Claude. They tested the models on contexts up to 1 million tokens.
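To make the Latent List task concrete, here is a minimal sketch of how such a test item could be generated: a stream of list operations interleaved with irrelevant statements, with the true final list kept as the gold answer. This is our own illustration of the idea, not DeepMind's actual benchmark code:

```python
import random

def make_latent_list_item(n_ops: int, seed: int = 0):
    """Generate a Latent List-style prompt and its gold answer.

    The model must ignore the no-op `print` lines and track the list state.
    """
    rng = random.Random(seed)
    lines, state = [], []
    for _ in range(n_ops):
        op = rng.choice(["append", "pop", "noise"])
        if op == "append":
            v = rng.randint(0, 9)
            state.append(v)
            lines.append(f"l.append({v})")
        elif op == "pop" and state:
            state.pop()
            lines.append("l.pop()")
        else:
            # Irrelevant statement: reads the list but never mutates it.
            lines.append("print(len(l))")
    prompt = "l = []\n" + "\n".join(lines) + "\n# What is the final value of l?"
    return prompt, state

prompt, answer = make_latent_list_item(8)
# `answer` holds the ground-truth final list; a model's reply to `prompt`
# would be scored against it. Longer contexts simply mean larger n_ops.
```

Because items are synthesized rather than scraped, the evaluation can be extended to arbitrary lengths and cannot have leaked into a training corpus, which is exactly the property the LSQ framework is after.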
Gemini models performed best on MRCR, GPT models excelled on Latent List, and Claude 3.5 Sonnet achieved the highest scores on IDK. However, all models exhibited a significant drop in performance as the complexity of the reasoning tasks increased, suggesting that even with very long context windows, current LLMs still have room to improve in their ability to reason over large amounts of information.

Figure: Frontier LLMs struggle with reasoning over long-context windows (source: arXiv)

"Frontier models have room to improve on all of the beyond-retrieval reasoning primitives (Latent List, MRCR, IDK) that we investigate in Michelangelo," Vodrahalli said. "Different frontier models have different strengths and weaknesses – each class performs well on different context ranges and on different tasks. What does seem to be universal across models is the initial drop

DeepMind’s Michelangelo benchmark reveals limitations of long-context LLMs

Samsung planning software update to address 'app throttling' issue

Samsung said on Friday it will commence a software update “as soon as possible” to address consumer complaints about a preinstalled app limiting the performance of Galaxy S22 smartphones.

The issue stems from the Game Optimising Service (GOS) app on the phones, which automatically limits the performance of devices when it detects a gaming app is in operation. The South Korean tech giant said it plans to add an option in its game launcher app to allow users to prioritise performance through the software update. More details on how this option will work are expected to be announced later.

Samsung previously explained that the GOS app was put on devices to prevent them from overheating and losing battery too quickly during gaming, for consumer safety. Beyond limiting gaming performance, there have also been unverified posts on social media and South Korean community forums claiming that the app has affected the performance of non-gaming apps. Samsung has denied these claims, saying that the GOS app only affects gaming apps.

The GOS app itself is not new to the Galaxy S22 series and has been present on previous generations of Galaxy smartphones. On those older devices, however, gamers had workarounds for the feature, but these have reportedly been blocked by Samsung’s recent One UI 4.1 update. Since sales began for the Galaxy S22 series, numerous complaints have been posted across Samsung’s community forums for South Korean consumers and user community pages.


Slash The Hidden Costs Of Your Customer Surveys

Nearly all customer experience (CX) measurement and voice-of-the-customer (VoC) efforts use customer surveys. But your surveys are costlier than you think. Obvious costs include the budget for a tech vendor you use to send and analyze surveys or incentives for customers. Hidden costs are more problematic because we don’t consider them enough. They arise when surveys:

- Squander customers’ attention, time, and goodwill.
- Deplete stakeholders’ time and their ability to make good, customer-focused decisions.
- Waste your own time on reporting data that people don’t act on.

In this blog, I’ll focus on the first issue. If you prefer to listen rather than read, check out our CX Cast episode, “Feedback Is A Touchpoint, Too.”

Surveys Squander Customers’ Attention, Time, And Goodwill

Consider these three major problems with surveys as they are today:

Surveys Consume Customer Attention

Your business can only survive if customers read, consider, and respond to your marketing emails, offers, campaigns, and information. You also need customers to take part in research so you can understand their future needs. Using some of that limited attention on a survey is absolutely worth it if the survey is good and brings you valuable data. But are most surveys? No. Growing efforts to collect zero-party data to feed firms’ personalization efforts and the wider martech stack will make this even worse: More firms will reach out to customers, asking them about their preferences and wishes.

Surveys Undermine Customer Relationships

You risk seeming like you don’t know customers and don’t care about them. My bank asked me in a CX survey which credit card I own and how often I use it. The credit card provider knows both of those things — maybe the CX team cannot connect the data, but asking me these questions undermines my trust in my bank.
Surveys Add A Negative Touchpoint To Customer Journeys

In addition to the problem of making customers feel unseen, firms optimize surveys for easy analysis and for the questions various departments want to ask. As a result, surveys usually are a longish interrogation that doesn’t flow well and includes selfish questions or questions that customers don’t care about. And in many current surveys, the design still resembles a web form from the 2000s. If you have read your Kahneman (and I know many of you have), you will also realize that the survey touchpoint comes toward the end of the broader customer experience that the survey is about. So a bad survey is doubly problematic, because the peak-end rule tells us the end of an experience matters a lot to how customers remember the experience. If they liked the branch visit but hated the survey, that will worsen memories of the overall experience!

We need to follow six principles, all under the motto of “design feedback collection as a touchpoint,” if we want to strengthen relationships, be able to capture customer attention, and create good experiences rather than bad ones.

1. Rethink Surveys As Conversations

Surveys should be designed to mimic natural, engaging conversations rather than interrogations. This approach involves creating a flow where questions are logically ordered and relevant to the customer’s experience. If you have conversational design experts at your company, get their recommendations on how to make the survey feel more like an engaging dialogue. If you do nothing else, read the survey aloud to someone who matters to you (your boss, wife, first date). This simple exercise can reveal issues with wording and flow that may not be apparent on paper. If the survey is embarrassing or feels tedious to you, it’s likely your customers will feel the same. Don’t expose your customers to it.

2. Don’t Just Say You Value Customers’ Feedback — Prove It

If customers gift you their time to give feedback, you are now responsible! You must make sure to give back. Customers want to know their input is valued and acted upon. Share examples of tangible changes made based on previous feedback. You can do that in one-to-many conversations or even in your next survey invite, as you see in this example. This not only encourages participation but also enhances the customer experience by reinforcing customers’ importance in shaping the brand’s direction. As discussed, organizations often have internal pressures to include numerous questions in a survey, which can overwhelm customers. Highlight the opportunity cost of using customers’ time for unnecessary questions in order to streamline surveys and respect customers’ input and time.

3. Pre-Test To Avoid Confusion And Ambiguity

Pre-test the survey with real customers or employees outside the project team. Many organizations think of A/B tests, and while those are important, you need to do more. You also cannot just ask respondents whether they understand the questions. Instead, ask them to restate the questions in their own words. This practice helps uncover potential misunderstandings and ensures clarity. For example, when asking a question like “Was our communication good?”, having respondents restate it tells you whether they interpret this as the effectiveness of language, the overall communication process, or something else. Only if you identify these variances early on can you avoid confusion and gather more accurate and useful data.

4. Match Survey Content And Timing

I was recently invited to a radio interview on surveys by Marketplace, a US public radio broadcast covering business and the economy. The host, Dan, told the story of how he bought tomato seeds at Home Depot and got a survey about the purchase before he was even able to sow them, much less eat the fruits of his labor.
Check out Dan’s interview with Fred Reichheld, two other experts, and me. You can still send a survey right away, but limit yourself to things the customer can judge — like how easy it was to buy the tomatoes. And focus on things you want to change in that moment. In addition, collect feedback after the customer achieves the goal of their journey. Only then will you get customers to reflect. These insights form the customers’ “remembering self” which influences
