Retailers, Reward Loyalty With Value-Based Personalization This Holiday Season

The holiday season is the time to show those closest to you that you care. Holiday shoppers select gifts to show their loved ones appreciation and affection, and companies must do the same for their loyal customers.

US shoppers want offers tailored to them …

Fifty-four percent of US online adults say that receiving offers tailored to their preferences, or available only to them, is a key reason they join loyalty programs, per Forrester’s Retail Topic Insights Survey, 2023. Used effectively, personalization can deliver value to loyal customers. To do so, companies should leverage loyal-customer insights to tailor moments, such as reward redemption and surprise-and-delight offers, to customers’ unique shopping habits and preferences.

… but to succeed, ground your personalization tactics in customer values

A personalization tactic is only as good as the value a customer gets from it, and consumers still have mixed feelings about personalized interactions from companies. This holiday season, use personalization in your loyalty program to help customers realize value across four dimensions (see Figure 1 below):

Economic value. Personalized offers and promotions help customers save money through the holiday season’s heavy spending blitz. For example, retailers might offer their most loyal customer segments free expedited shipping so that program members in a pinch can get gifts to their loved ones before holiday celebrations.

Functional value. To reduce decision fatigue, implement product and service recommendation tools that make choices easier. For example, you could offer personalized shipping and in-store pickup options to get customers their purchases quickly, and encourage reward redemption by granting early access to those products with program points.

Experiential value.
Seventy-four percent of US online adults who belong to customer loyalty programs say they are more likely to participate in a loyalty program if brands make it easy to use. If shoppers go to brick-and-mortar stores, ensure that in-store customers (and the store associates helping them) can access their personalized program benefits or rewards at checkout, whether through point-of-sale systems or on their mobile phones.

Symbolic value. Use personalized moments to show customers that you understand and appreciate them. For example, a retailer might surprise and delight customers with personalized thank-you notes for shopping with them, an acknowledgment of how long they’ve been a loyal customer, and/or offers for future purchases in the new year.

Figure 1: Use Program Personalization To Deliver Four Types Of Customer Value

Be sure to read our new report, Use Personalization To Activate Loyalty Program Value, for more specifics on how to get personalization in loyalty programs right this holiday season. Forrester clients can schedule a guidance session or inquiry with us to continue this conversation. And stay tuned for our upcoming “The State Of US Consumer Personalization, 2024” report, publishing in November, for more insights on how consumers really feel about personalization. Happy holiday planning!


How to Keep IT Up and Running During a Disaster

The United States experienced 28 disasters, including storms, flooding, tornadoes, and a wildfire, that cost more than a billion dollars each in 2023, according to the National Oceanic and Atmospheric Administration (NOAA). And those were only the most expensive, weather-related events in one country. Around the world, natural disasters, including non-weather-related phenomena such as earthquakes and tsunamis, wreak havoc on human life and on infrastructure, including the IT that keeps life in the digital age running smoothly.

While the devastation caused by massive events understandably captures headlines, even relatively minor natural disasters such as large storms can affect IT operations. A 2024 report found that 52% of data center outages were the result of power failures, and in the last decade, 83% of major power outages were weather-related. Even relatively minor storms can take out power lines.

Fourteen percent of respondents surveyed for InformationWeek’s 2024 Cyber Resilience Strategy Report said that their network accessibility had been disrupted by severe weather or a natural disaster, and 16% ranked natural disasters as the single most significant event they had experienced.

Some businesses affected by natural disasters don’t survive at all: according to the Federal Emergency Management Agency, 43% of businesses never reopen, and almost a third go out of business within two years. Loss of IT accessibility for nine days or more typically results in bankruptcy within one year.

Only 23% of respondents to a survey on the effects of Hurricane Sandy in 2012 were prepared for the storm. Despite the increasing prevalence of weather-related events because of climate change, the US Chamber of Commerce Foundation found that only 26% of small businesses have a disaster plan in place as of this year, suggesting that few have planned for how their IT will be affected.
Here, InformationWeek investigates strategies for keeping IT operational when disaster inevitably strikes, with insights from Jenny Gerson, senior director of sustainability at data center operator DataBank, and Kevin Miller, chief technology officer for North America at industrial software company IFS.

Preventing Damage to Infrastructure

Depending on the location of an IT facility and the natural disasters common to the region, any number of steps may need to be taken to prevent damage to essential physical IT components.

“We take into account all kinds of natural disasters when we’re looking at where to site a data center — we try to site it in the safest place we can,” Gerson says.

In earthquake-prone regions, buildings need to be able to withstand temblors, and additional reinforcements may be needed to prevent servers and wiring from being disrupted. Operators in areas prone to severe storms and hurricanes may need to both stormproof their buildings and ensure that essential equipment is located above ground level or in waterproof enclosures to avoid potential flood damage. Flood barriers may be advisable in some areas. Attention to potential mold damage after flooding may also be necessary, as mold can create dangerous conditions for employees. And fire suppression systems may be able to mitigate damage before equipment is completely destroyed.

IoT sensing technology can provide early warning of disaster events and keep an eye on equipment if human access to facilities is cut off. Sensors and cameras can help determine when it may be appropriate to switch operations to other facilities or to back up servers. Moisture sensors, for example, can detect whether floods may be on the verge of impacting device performance.

But, Miller notes, IoT devices can sometimes fail.
“We’re seeing customers who are starting to rely more on options like Starlink,” he says. “There’s no physical infrastructure other than a mini satellite dish that’s providing that connectivity — but [it offers the] ability for them to get data, feed it back, analyze it, and then make predictive assessments on what they should be doing.”

Onsite generators, including sustainable onsite power plants using solar or wind, and microgrids can keep operations running even if access to the main grid is cut off. Redundancy in cooling is crucial for data centers as well.

“Should the utility go down, we have a seamless way to get to our generator backup so there are no blips in power,” Gerson claims. “We always have backup cooling systems.”

Creating Backups

Geodiversity can make or break IT operations during a natural disaster. While steps can be taken to protect operations, they may not always be sufficient to prevent interruption. If a data center or other IT operation is taken offline, switching over to a location in an unaffected area, or to more dispersed, cloud-based operations, can be relatively seamless if proper planning is in place.

This type of redundancy requires careful implementation of regular backups. Cloud technology makes this relatively efficient, but hard backups may be useful as well. Setting shorter recovery point objectives, while potentially more expensive in the short term, will likely make it easier to get things back up and running if an operation is taken offline by a disaster.

IoT devices may also help in recovering data that is not fully backed up. Many of these devices store data on their own before transmitting portions of it to the servers to which they are connected. In the event of a disaster, that stored information may be helpful in data restoration.
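As a rough illustration of the sensor-driven decisions described above, a monitoring routine might flag a failover to a secondary site when a reading breaches a threshold or a sensor goes silent. The metric names and thresholds below are invented for the sketch, not taken from any vendor's product:

```python
# Hypothetical sketch: deciding when environmental sensor readings should
# trigger failover to a secondary site. Metric names and thresholds are
# illustrative only.
THRESHOLDS = {"moisture_pct": 40.0, "temp_c": 35.0}

def should_fail_over(readings: dict) -> bool:
    """Flag failover if any metric breaches its limit or a sensor is silent."""
    for metric, limit in THRESHOLDS.items():
        value = readings.get(metric)
        if value is None or value > limit:
            return True  # sensor offline or reading out of bounds
    return False

print(should_fail_over({"moisture_pct": 12.0, "temp_c": 28.0}))  # False
print(should_fail_over({"moisture_pct": 55.0, "temp_c": 28.0}))  # True
```

Treating a missing reading as a trigger matters because, as Miller notes, the sensors themselves can fail; a silent sensor should prompt a human decision rather than silent inaction.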
Regulatory Compliance

In disaster-prone regions, it is advisable to proactively build relationships with government authorities and emergency response agencies. This can help both in ensuring continued compliance and in securing assistance in the event of a natural disaster.

“There are certain aspects of [disaster response] that need to be captured,” Miller says. “A lot of times in crisis mode, that becomes a secondary focus. But [disaster management] systems allow the tracking and the recording of that information.”

Being aware of deadlines for compliance reporting and being in contact with


Why multi-agent AI tackles complexities LLMs can’t

The introduction of ChatGPT has brought large language models (LLMs) into widespread use across both tech and non-tech industries. This popularity is primarily due to two factors:

LLMs as a knowledge storehouse: LLMs are trained on a vast amount of internet data and are updated at regular intervals (GPT-3, GPT-3.5, GPT-4, GPT-4o, and others).

Emergent abilities: As LLMs grow, they display abilities not found in smaller models.

Does this mean we have already reached human-level intelligence, which we call artificial general intelligence (AGI)? Gartner defines AGI as a form of AI that possesses the ability to understand, learn, and apply knowledge across a wide range of tasks and domains. The road to AGI is long, with one key hurdle being the auto-regressive nature of LLM training, which predicts words based on past sequences. As Yann LeCun, one of the pioneers of AI research, points out, LLMs can drift away from accurate responses due to their auto-regressive nature. Consequently, LLMs have several limitations:

Limited knowledge: While trained on vast data, LLMs lack up-to-date world knowledge.

Limited reasoning: LLMs have limited reasoning capability. As Subbarao Kambhampati points out, LLMs are good knowledge retrievers but not good reasoners.

No dynamicity: LLMs are static and unable to access real-time information.

To overcome these challenges, a more advanced approach is required. This is where agents become crucial.

Agents to the rescue

The concept of the intelligent agent in AI has evolved over two decades, with implementations changing over time. Today, agents are discussed in the context of LLMs. Simply put, an agent is like a Swiss Army knife for LLM challenges: it can help with reasoning, provide a means to get up-to-date information from the internet (solving the dynamicity issue), and achieve tasks autonomously.
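To make the "Swiss Army knife" idea concrete, here is a minimal sketch of an agent loop: a stubbed "LLM" decides to call a tool for fresh information before answering. Every function name and canned response here is invented for illustration; a real agent would call a model API and real tools instead:

```python
# Minimal sketch of an LLM-driven agent loop with one tool.
# llm_stub and search are stand-ins, not real model or search APIs.
def llm_stub(prompt: str) -> str:
    # Pretend reasoning: request a lookup first, then answer.
    if "Observation:" not in prompt:
        return "Thought: I need current data.\nAction: search[weather Paris]"
    return "Thought: I have what I need.\nFinal Answer: Sunny, 21C"

def search(query: str) -> str:
    return "Sunny, 21C"  # stand-in for a real web/API tool

def agent_loop(question: str, max_steps: int = 3) -> str:
    prompt = f"Question: {question}"
    for _ in range(max_steps):
        reply = llm_stub(prompt)
        if "Final Answer:" in reply:
            return reply.split("Final Answer:")[1].strip()
        # Parse the requested action and feed the observation back in.
        query = reply.split("Action: search[")[1].rstrip("]")
        prompt += f"\n{reply}\nObservation: {search(query)}"
    return "no answer"

print(agent_loop("What's the weather in Paris?"))  # Sunny, 21C
```

The loop shows how an agent gets around an LLM's static knowledge: the model's "thought" triggers a tool call, and the tool's observation is appended to the prompt for the next reasoning step.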
With an LLM as its backbone, an agent formally comprises tools, memory, reasoning (or planning), and action components.

Figure: Components of an agent (Image Credit: Lilian Weng)

Components of AI agents

Tools enable agents to access external information, whether from the internet, databases, or APIs, allowing them to gather necessary data.

Memory can be short- or long-term. Agents use scratchpad memory to temporarily hold results from various sources, while chat history is an example of long-term memory.

The reasoner allows agents to think methodically, breaking complex tasks into manageable subtasks for effective processing.

Actions: Agents perform actions based on their environment and reasoning, adapting and solving tasks iteratively through feedback. ReAct is one of the common methods for iteratively interleaving reasoning and action.

What are agents good at?

Agents excel at complex tasks, especially in a role-playing mode that leverages the enhanced performance of LLMs. For instance, when writing a blog, one agent may focus on research while another handles writing, each tackling a specific sub-goal. This multi-agent approach applies to numerous real-life problems.

Role-playing helps agents stay focused on specific tasks to achieve larger objectives, reducing hallucinations by clearly defining the parts of a prompt, such as role, instruction, and context. Since LLM performance depends on well-structured prompts, various frameworks formalize this process. One such framework, CrewAI, provides a structured approach to defining role-playing, as we’ll discuss next.

Multi-agent vs. single agent

Take the example of retrieval-augmented generation (RAG) using a single agent. It’s an effective way to empower LLMs to handle domain-specific queries by leveraging information from indexed documents. However, single-agent RAG comes with its own limitations, such as retrieval performance or document ranking.
Multi-agent RAG overcomes these limitations by employing specialized agents for document understanding, retrieval, and ranking. In a multi-agent scenario, agents collaborate in ways similar to distributed computing patterns: sequential, centralized, decentralized, or via shared message pools. Frameworks like CrewAI, AutoGen, and LangGraph + LangChain enable complex problem-solving with multi-agent approaches. In this article, I have used CrewAI as the reference framework to explore autonomous workflow management.

Workflow management: A use case for multi-agent systems

Most industrial processes are about managing workflows, be it loan processing, marketing campaign management, or even DevOps. Steps, either sequential or cyclic, are required to achieve a particular goal. In a traditional approach, each step (say, loan application verification) requires a human to perform the tedious and mundane task of manually processing each application and verifying it before moving to the next step, and each step requires input from an expert in that area.

In a multi-agent setup using CrewAI, each step is handled by a crew consisting of multiple agents. For instance, in loan application verification, one agent may verify the user’s identity through background checks on documents like a driver’s license, while another agent verifies the user’s financial details.

This raises the question: Can a single crew (with multiple agents in sequence or hierarchy) handle all loan processing steps? While possible, this complicates the crew, requiring extensive temporary memory and increasing the risk of goal deviation and hallucination. A more effective approach is to treat each loan processing step as a separate crew, viewing the entire workflow as a graph of crew nodes (using tools like LangGraph) operating sequentially or cyclically. Since LLMs are still in the early stages of intelligence, full workflow management cannot be entirely autonomous.
Human-in-the-loop input is needed at key stages for end-user verification. For instance, after the crew completes the loan application verification step, human oversight is necessary to validate the results. Over time, as confidence in AI grows, some steps may become fully autonomous. Currently, AI-based workflow management functions in an assistive role, streamlining tedious tasks and reducing overall processing time.

Production challenges

Bringing multi-agent solutions into production can present several challenges:

Scale: As the number of agents grows, collaboration and management become challenging. Various frameworks offer scalable solutions; for example, LlamaIndex uses an event-driven workflow to manage multi-agents at scale.

Latency: Agent performance often incurs latency as tasks are executed iteratively, requiring multiple LLM calls. Managed LLMs (like GPT-4o) are slow because of implicit guardrails and network delays.
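The loan workflow described above, with separate verification crews and a human-in-the-loop gate, can be sketched as a plain pipeline. Each function stands in for a crew; the names and checks are invented for illustration and are not CrewAI or LangGraph APIs:

```python
# Sketch of a workflow as a sequence of "crew" nodes, with a human gate.
# All step logic below is a stub for what agent crews would actually do.
def identity_check(app):
    # Crew 1: verify identity from submitted documents (stubbed check).
    app["identity_verified"] = bool(app.get("license"))
    return app

def finance_check(app):
    # Crew 2: verify financial details (stubbed income threshold).
    app["finances_verified"] = app.get("income", 0) >= 30000
    return app

def human_review(app):
    # Human-in-the-loop gate: a person validates the crews' results.
    app["approved"] = app["identity_verified"] and app["finances_verified"]
    return app

PIPELINE = [identity_check, finance_check, human_review]

def run_workflow(app):
    for step in PIPELINE:
        app = step(app)
    return app

result = run_workflow({"license": "D1234", "income": 45000})
print(result["approved"])  # True
```

Keeping each step as its own node mirrors the article's argument: small, focused crews with limited shared state reduce goal deviation, and the human gate can later be removed step by step as confidence grows.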


11:11 Systems: Empowering enterprises to modernize, protect, and manage their IT assets and data

In 2020, 11:11 CEO Brett Diamond noticed a gap in the market. Virtually every company relied on cloud, connectivity, and security solutions, but no technology organization provided all three. Diamond founded 11:11 Systems to meet that need, and 11:11 hasn’t stopped growing since. Leaders across every industry depend on its resilient cloud platform, operated by a team of industry veterans and experts with extensive networking, connectivity, and security expertise.

“Our valued customers include everything from global, Fortune 500 brands to startups that all rely on IT to do business and achieve a competitive advantage,” says Dante Orsini, chief strategy officer at 11:11 Systems. “We provide enterprises with one platform they can rely on to holistically address their IT needs today and in the future and augment it with an extensive portfolio of managed services – all available through a single pane of glass. We believe that IT teams, and the operational leaders and business functions they support, need a partner like 11:11 Systems that is capable of modernizing, protecting, and managing their entire IT estate.”

Orsini notes that it has never been more important for enterprises to modernize, protect, and manage their IT infrastructure. He points to the ever-expanding cyber threat landscape, the growth of AI, and the increasing complexity of today’s global, highly distributed corporate networks as examples.

“Many organizations are at an inflection point where they see the value in AI and realize it may have the potential to radically improve their business, but they need an experienced partner to guide them to modernize the systems that effective AI programs require,” adds Orsini. “They also know that the attack surface is increasing and that they need help protecting core systems.
They are intently aware that they no longer have an IT staff that is large enough to manage an increasingly complex compute, networking, and storage environment that includes on-premises, private, and public clouds. We enable them to successfully address these realities head-on.”

11:11 Systems offers a wide array of connectivity services, including wide area networks and other internet access solutions that exceed the demanding requirements of a high-performance multi-cloud environment. It also delivers the security services and solutions, including best-in-class firewalls, endpoint detection and response, and security information and event management, needed to address the most stringent cyber resiliency requirements.

Notably, the company’s extensive cloud solutions portfolio, including the 11:11 Public Cloud and 11:11 Private Cloud, draws on those offerings and includes numerous services, such as Infrastructure-as-a-Service, Backup-as-a-Service, Disaster Recovery-as-a-Service, and full multi- and hybrid cloud capabilities. These ensure that organizations match the right workloads and applications with the right cloud.

Orsini also stresses that every organization’s optimal cloud journey is unique. “We look at every business individually and guide them through the entire process from planning to predicting costs – something made far easier by our straightforward pricing model – to the migration of systems and data, the modernization and optimization of new cloud investments, and their protection and ideal management long-term,” he says. “We also offer flexible month-to-month bridge licensing options for existing hardware, giving customers time to make informed long-term decisions for their business. And throughout all of this, we enable them to draw on the VMware assets they know and trust.”

Justin Giardina, CTO at 11:11 Systems, notes that the company’s dedicated compliance team is also a differentiator.
It offers oversight capabilities that exceed the requirements of standards and regulations such as the Payment Card Industry Data Security Standard, the Health Insurance Portability and Accountability Act, and Europe’s General Data Protection Regulation. “At 11:11 Systems, we go exceptionally deep on compliance,” says Giardina. “We encourage customers to look at our data centers, review our compliance controls, and see how our support tickets are processed – a key point in data sovereignty – all while using a platform that delivers incredible visibility.”

A network built by architects for architects

“In addition to centralizing cloud, connectivity, and security offerings, we built our platform to address the needs of organizations with thousands of applications,” adds Giardina. “It also offers exceptional transparency. So, if a customer wants to monitor or see everything that is happening across their locations, like CPU ready times or latency, that intelligence is readily visible.”

Giardina notes that VMware by Broadcom technologies are used throughout the platform. “VMware’s technologies are at the core,” he says. “Administrators often take things like high availability that are native to VMware’s offerings for granted. Even out of the box, they enable incredible resiliency, which is why some customers move to our platform from hyperscalers. It’s also far easier to migrate VMware-based systems to our VMware-based cloud without expensive retooling while maintaining the same processes, provisioning, and performance.”

11:11 Systems offers Catalyst, an application it developed that allows customers to examine their existing infrastructure, identify which workloads need to migrate to the cloud, and complete an analysis of any challenges that must be addressed up front, including how long it will take to move the data and other variables. It’s another way that Orsini believes a VMware-based infrastructure supports success in the cloud.
“Many are not yet familiar with VMware Cloud Foundation (VCF), but you won’t find a better environment in which to run a production application,” he says. “At 11:11, we offer real data on what it will take to migrate to our platform and achieve multi- and hybrid cloud success. For customers who are unprepared to upgrade and are considering exiting the data center business, our expertise and platform can help navigate the transition effectively and drive the proper outcome. In Catalyst, they can see what a successful plan looks like. And while we believe we’ve built the best platform, we also thrive helping customers that need to use hyperscalers. We enable them to bring everything together so that their multi-cloud infrastructure addresses the most demanding business continuity and cyber resilience requirements.”

For more information on 11:11 Systems, visit here. Look to CIO.com for stories about the industry-leading providers in the Broadcom Advantage Program and


What is a Passkey? Definition, How It Works and More

A passkey is an authentication method that can be used as commonly as a password but provides additional security. Passkeys differ from passwords in that they combine private and public cryptographic keys to authenticate users, whereas a password relies on a specific string of characters. According to Google, the most immediate benefits of passkeys are that they’re phishing-resistant and spare people the headache of remembering numbers and special characters in passwords.

As passwordless authentication continues to evolve in response to phishing-related risks, consider using passkeys to add a layer of security that protects your online accounts and data. This article will define passkey technology, explore how it works, and discuss the added security benefits of using a passkey.

What is a passkey?

A passkey refers to a code or a series of characters used to gain access to a secured system, device, network, or service.
Passkeys are often used in conjunction with usernames or user IDs to create two-factor authentication.

SEE: How to Create an Effective Cybersecurity Awareness Program (TechRepublic Premium)

After you’ve established a passkey, all you need to do is log in to complete the authentication process, typically using biometric data such as a fingerprint or facial recognition. For those who use a passkey, logging in becomes a simple, nearly automatic process; for malicious actors, it becomes nearly impossible. The implementation of passkeys is highly adaptable, since they may be configured to be cloud-synced or hardware-bound, depending on the user’s choices for the particular application, service, or device.

How do passkeys work?

When logging in for the first time, a user who wants to access an app or website with passkey technology, such as NordPass, will be asked to generate an original passkey. This passkey, which will be required for authentication in the future, can be accessed using either biometrics or a personal PIN, depending on the user’s selection and the capabilities of their preferred device.

Figure A: NordPass can automatically create a passkey for a website account. Image: Lance Whitney/TechRepublic

During this stage, two mathematically linked cryptographic keys are generated: a public key that stays with the website, service, or application and is connected to the account, and a private key that stays on the user’s hardware or in their cloud account.

How do you sign in with a passkey instead of a password?

Passkey authentication happens in the background, making login seamless on the user’s end, with just the click of a button.

Figure B: You can easily log into a site with your associated passkey. Image: Lance Whitney/TechRepublic

During login, the service or application sends a randomly generated “challenge” to the user’s device, which responds by signing the challenge with the private key.
SEE: Passkey Adoption Is Accelerating in APAC — Except for Australia (TechRepublic)

The app or website can confirm the legitimacy of the private key by using the corresponding public key to verify the response. If the signature on the response validates against the original randomly generated challenge, authentication succeeds and access is granted; if not, access is denied.

How are passkeys different from passwords?

The most critical differences between passkeys and passwords include:

Passwords can be illicitly obtained through brute-force hacking, social engineering, and data breaches, whereas passkeys are more difficult (though not impossible) to steal. Hackers would need to physically steal your device, or breach your cloud account and guess your PIN or find a way to bypass your biometric authenticator.

Secure password usage requires users to generate and remember many complex credentials or employ a password manager, which has its own challenges and risks. Passkeys automatically authenticate users with their device’s unlock mechanisms, making them much simpler and more convenient to use.

Passwords can be used across any device without any additional setup, but passkeys are usually bound to specific hardware. A cloud-based passkey solution may work across multiple devices, but users should be aware that their private keys will be stored on someone else’s servers instead of locally.

What are the benefits of using a passkey?

Unique logins: A password is reused every time you log in to a particular account, which means any malicious actor who gets their hands on it will have unfettered access. Passkeys, on the other hand, use cryptographic key-pair technology to create unique authentication credentials for every login, giving hackers nothing to “guess” or steal. Passkeys are resistant to brute-force attacks and social engineering methods like phishing, and they can’t be exposed in a data breach.
Added security layer: Passkeys use your device’s authenticator, such as a biometric login or PIN code, as a sort of built-in 2FA that protects your account. Whether your private key is stored locally on your device or in the cloud, a would-be hacker would need to authenticate with your device before gaining access to it and compromising your account.

User convenience: Passkeys don’t need to be memorized or periodically changed, and logging in with them requires a single button press, providing a much more streamlined experience than passwords. And, as I just mentioned, they include 2FA to better protect accounts, but they don’t require users to provide secondary authentication for each individual login — once you’ve
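The challenge-response flow described above can be illustrated with a toy signature scheme. Real passkeys use WebAuthn with strong asymmetric cryptography; the tiny textbook-RSA keypair below is deliberately insecure and only sketches the idea of signing a server challenge with a private key and verifying it with the matching public key:

```python
import hashlib
import secrets

# Toy textbook-RSA keypair with tiny primes -- deliberately insecure,
# for illustration only. Real passkeys use far stronger cryptography.
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent: stays with the website
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent: stays on the device

def sign(challenge: bytes) -> int:
    """Device side: sign the server's challenge with the private key."""
    digest = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(digest, d, n)

def verify(challenge: bytes, signature: int) -> bool:
    """Server side: check the signature using only the public key."""
    digest = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(signature, e, n) == digest

challenge = secrets.token_bytes(16)   # random challenge from the server
signature = sign(challenge)           # the user's device responds
print(verify(challenge, signature))   # True -> access granted
```

Because only a hash of the challenge crosses the network and the private key never leaves the device, there is nothing reusable for a phisher to capture; a response signed for one challenge does not validate against a different one.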


GenAI Is A Land Of Confusion For Revenue Leaders

One of the benefits of the analyst’s role at Forrester is engaging with B2B leaders to understand their business challenges and priorities. This month, I was fortunate to spend time in person with a broad range of sales and operations leaders at Forrester’s B2B Summit EMEA and other events in London. Much of the conversation focused on how generative AI (genAI) is impacting, and will impact, go-to-market (GTM) efforts, specifically within sales. I came away from these valuable discussions with three clear conclusions:

1) Revenue leaders are confused by the pace of change around genAI.

2) Leaders are bombarded and frequently bamboozled by vendor hype around the latest AI “game-changer” for sales.

3) Sales and operations leaders are slow and hesitant to react because they lack a clear AI strategy, or even a broader technology strategy, to help them understand AI changes and guide their decision-making.

Keeping up with the pace of AI change

Busy executives face the challenge of keeping up to date with the rapid changes in genAI, including the relentless pace of large language model innovations from Anthropic, OpenAI, Microsoft, and others. They must understand these changes and their broad implications while addressing their own concerns (and those of others) about being left behind. They share concerns about other functions in their organizations moving faster, or about competitors and peers finding ways to leverage AI innovation for competitive differentiation. This sense of FOMO among revenue leaders is also partly cultivated by the messaging from sales tech providers selling to these personas.

AI snake oil messaging is core to sales confusion

Several leaders expressed their frustration with the ambiguity around what providers define as AI across sales tech.
For example, leaders are cynical about attempts by providers — desperate to differentiate in the market and garner the attention of sales leaders — to dress up basic rule-based functionality as AI agents; think of it as ‘agent washing‘. As a result, revenue leaders are struggling to separate fact from fiction, or to separate tomorrow’s vision (e.g., fully autonomous multi-agent workflows) from today’s reality (most newly launched agents are simple reflex models responding to triggers with predefined responses).

AI ambiguity distracts from strategic clarity

Across these conversations, it became obvious that there was frequently an absence of strategy — not just in terms of AI, but more broadly with regard to technology management and the need to envision, design, deliver, and evolve solutions that meet the changing needs of B2B organizations. The ambiguity around AI and the resulting market confusion isn’t helping; it distracts sales leadership from focusing on strategic foundations. In such a confusing and fast-paced environment, AI creates new pressure for impactful technology leadership in operations to help guide GTM AI investment.

Taking control of your AI and tech strategy

It’s critical that revenue leaders take ownership of developing a proactive strategy for applying AI for performance impact. It’s also critical to separate reality from vision (or hype) in order to make the right decisions moving forward. Don’t wait for others in your organization to address your needs and use cases for you. Forrester recommends three steps to get started:

Put rev ops in charge of your go-to-market AI strategy. Rev ops is the glue across GTM functions — its primary purpose is to unify data, insights, technology, and processes. Rev ops is ideally positioned to leverage AI to enable the unification of buyer and customer orchestration efforts.

Help elevate the tech capabilities of your ops team. Like any other tech, AI must deliver value, from enhancing buyer experiences and perceptions of value to transforming frontline productivity and effectiveness. Delivering against this requires operations teams not only to balance technical capabilities with delivery and management, but also to bring strategic vision, change management, and the ability to demonstrate and communicate investment value.

Define and align your AI themes and use cases. Creating key themes for AI provides an opportunity to define your goals and objectives clearly, align with strategic initiatives, and avoid the many distractions surrounding AI. Define and prioritize your specific use cases under these themes to provide clarity and purpose for communicating to stakeholders and building support and buy-in.

If you could do with help bringing clarity to your understanding of AI’s impact on sales, or if you’d like to talk further about your AI or tech strategy, please reach out to me at [email protected]. source


Cloud Co. To Pay $300K Over FCC Subsidy Fund Paperwork

By Nadia Dreid (November 1, 2024, 8:45 PM EDT) — Cloud communication company Fuze Inc. is going to be shelling out $300,000 to the Federal Communications Commission for not following certain rules related to Universal Service Fund contributions, the agency said Friday…. source


How To Turn an IT Disruption to Your Advantage

When it comes to service outages, data breaches, or systems failures, there is typically only one thing IT leaders can say with total confidence: This won’t be the last time.

No matter how effective your tech and security investments are, risks cannot be entirely precluded. And if a crisis does occur, it can not only demonstrate the effectiveness of your planning — it can also be turned to your advantage.

For example, let’s turn to an area that few people would associate with an appetite for risky behavior. Economics is famous for advancing “one crisis at a time,” and CIOs could benefit by taking a leaf out of its (many) books. Like firefighters battling a blaze, the immediate job for CIOs is to put out the flames. Once the situation is under control, teams need to forensically identify what sparked it in the first place so that they can help prevent future issues.

Events such as a major outage are a signal for CIOs to consider:

Using the crisis as a catalyst to make the case for funding for cost avoidance, risk mitigation, and foundation investments;
Identifying which of your suppliers are partners and which are vendors;
Analyzing your non-financial response measures.

A Crisis as Catalyst

One of the toughest things for CIOs is to get funding for issues that haven’t yet occurred. Who among us likes paying for problems that haven’t yet happened? Cost avoidance can seem hypothetical versus solving problems that are clear and present dangers. The costs of those immediate challenges are usually more obvious, yet mitigating issues before they occur may be far more cost-effective — and the risks may be much more than financial.

CIOs can not only learn from the experience of others; they can use it to show how their tech investments are securing the business and averting financial and reputational damage.

Partners or Vendors?
There is a major shift to outcome-based contracts in the tech industry, largely driven by developments such as generative AI, which is now realizing its potential to boost productivity and is thereby altering traditional value propositions. Traditionally, vendors were paid based on the volume of work or hours done. The new model instead emphasizes compensation based on the achievement of specific outcomes, such as cost reductions or revenue improvements. This approach not only aligns the vendor’s incentives with the client’s goals, but it also means that vendors take on a greater share of risk. The subtle shift here is away from the typical buyer-vendor relationship toward a technology partnership. Of course, this can be a powerful thing: Both organizations become deeply invested in the project’s success, sharing in both the risks and rewards.

However, it won’t eliminate all risk. Both the client and the vendor need to think clearly about how their collaboration will work beyond just the technical capabilities. Both organizations will need to consider whether their corporate cultures are aligned, along with their shared vision for the project. Tech leaders need to consider whether their potential partners are not only capable of driving innovation but also culturally attuned to foster a collaborative and sustainable relationship. The two companies will likely share headlines together — in good times or bad. So how will that feel, and to what degree will it impact each company’s brand, identity, and customer trust?

It is imperative for companies to establish clear risk-mitigation strategies. And they will need agreed, robust frameworks that ensure continuity and reliability even when unexpected disruptions occur.
A Focus on the Non-Financials

When it comes to contracts, tech leaders focus on mitigating risk, and discussions tend to revolve around “who pays if X goes wrong.” There is one aspect, though, that they need to feel total ownership over: their company’s public reputation.

Companies need thorough restorative plans for when something does go wrong — including how they mitigate customer and public impact and perception. In the wake of a crisis, CIOs need to consider how they respond immediately so that customers and clients feel supported. To prepare effectively for inevitable incidents, companies need a rapid reaction plan that prioritizes customer support, ensuring clients feel adequately supported during disruptions.

What’s on the CIO’s Mind

There is an evolution taking place in the role of the CIO. As the role of technology has expanded in business, it has created an isomorphic effect in the IT function. The CIO’s role has also grown in importance and scope: CIOs need to consider how to grow from being an effective cost manager to being a growth driver. Similarly, the shift toward outcome-based contracts means that tech leaders are now judged not only on costs but also on non-financial outcomes.

In my experience, ambitious tech leaders typically operate in two mindsets. Like most C-suite leaders, to some extent they are considering their next step or role, perhaps with a bigger organization. That may be beyond the CIO function, or they may be considering evolving that function into a new form altogether.

The second mindset is more concerned with legacy: As a C-suite leader, they don’t just want to keep the lights on; they want to make an impact that their name will be tied to. This will be a major move, driven by technology and perhaps requiring an acquisition, a significant investment, a people transformation program, or all of the above.
Whether you are trying to reach the next level or create a legacy, having the right risk-mitigation strategy, the right partners, and both financial and non-financial responses is a key building block. A little trouble might just be what you need to reach your goals. source


Google’s AI system could change the way we write: InkSight turns handwritten notes digital

A centuries-old technology — pen and paper — is getting a dramatic digital upgrade. Google Research has developed an artificial intelligence system that can accurately convert photographs of handwritten notes into editable digital text, potentially transforming how millions of people capture and preserve their thoughts. The new system, called InkSight, represents a significant breakthrough in the long-running effort to bridge the divide between traditional handwriting and digital text. While digital note-taking has offered clear advantages for decades — searchability, cloud storage, easy editing, and integration with other digital tools — traditional pen-and-paper note-taking remains widely preferred, according to the researchers.

A page from “Alice in Wonderland” shown in its original form (left) and after digital conversion by Google’s InkSight AI (right), demonstrating the system’s ability to preserve the natural character of handwritten text while making it digital. (Credit: Google)

How Google’s new AI system understands human handwriting better than ever before

“Digital note-taking is gaining popularity, offering a durable, editable, and easily indexable way of storing notes in the vectorized form,” Andrii Maksai, the project lead at Google Research, explained in the paper. “However, a substantial gap remains between this way of note-taking and traditional pen-and-paper note-taking, a practice still favored by a vast majority.” What makes InkSight revolutionary is its approach to understanding handwriting. Previous attempts to convert handwritten text to digital format relied heavily on analyzing the geometric properties of written strokes — essentially trying to trace the lines on the page. InkSight instead combines two sophisticated AI capabilities: the ability to read and understand text, and the ability to reproduce it naturally.
The results are remarkable. In human evaluations, 87% of the samples produced by InkSight were considered valid tracings of the input text, and 67% were indistinguishable from human-generated digital handwriting. The system can handle real-world scenarios that would confound earlier systems: poor lighting, messy backgrounds, even partially obscured text. “To our knowledge, this is the first work that effectively de-renders handwritten text in arbitrary photos with diverse visual characteristics and backgrounds,” the researchers explain in their paper published on arXiv. The system can even handle simple sketches and drawings, though with some limitations.

The same multilingual birthday note shown in three stages: the original handwriting (left), InkSight’s word-level analysis with color-coded processing (center), and the final digitized version with preserved character strokes (right). The system maintains the personal style of handwriting across Chinese, English, and French text. (Credit: Google)

Why handwriting still matters in our digital age, and how AI could help preserve it

The technology arrives at a crucial moment in the evolution of human-computer interaction. Despite decades of digital advancement, handwriting remains deeply ingrained in human cognition and learning. Studies have consistently shown that writing by hand improves memory retention and understanding compared to typing. This has created a persistent challenge for technology adoption in education and professional settings. “Our work aims to make physical notes, particularly handwritten text, available in the form of digital ink, capturing the stroke-level trajectory details of handwriting,” Maksai says. “This allows paper note-takers to enjoy the benefits of digital medium without the need to use a stylus.” The implications extend far beyond simple convenience.
In academic settings, students could maintain their preferred handwritten note-taking style while gaining the ability to search, share, and organize their notes digitally. Professionals who sketch ideas or take meeting notes by hand could seamlessly integrate them into digital workflows. Researchers and historians could more easily digitize and analyze handwritten documents. Perhaps most significantly, InkSight could help preserve and digitize handwritten content in languages that historically have limited digital representation. “Our work could allow access to the digital ink underlying the physical notes, potentially enabling the training of better online handwriting recognizers for languages that are historically low-resource in the digital ink domain,” notes Dr. Claudiu Musat, one of the project’s researchers.

From breakthrough to real-world application: The technical architecture and future of digital note-taking

The technology’s architecture is notably elegant. Built using widely available components, including Google’s Vision Transformer (ViT) and mT5 language model, InkSight demonstrates how sophisticated AI capabilities can be achieved through the clever combination of existing tools rather than building everything from scratch. Google has released a public version of the model, though with important ethical safeguards. The system cannot generate handwriting from scratch — a crucial limitation that prevents potential misuse for forgery or impersonation. Current limitations do exist. The system processes text word by word rather than handling entire pages at once, and it occasionally struggles with very wide stroke widths or significant variations in stroke width. However, these limitations seem minor compared to the system’s achievements. The technology is available for public testing through a Hugging Face demo, allowing users to experience firsthand how their handwritten notes might translate to digital form.
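The word-by-word processing described above can be sketched in a few lines. This is an illustrative Python sketch, not Google’s implementation: `recognize_word_ink` is a stand-in stub for the actual ViT-plus-mT5 model, and the data layout (digital ink as lists of stroke trajectories) is an assumption based on the paper’s description of stroke-level digital ink.

```python
# Sketch of a word-by-word "de-rendering" pipeline: crop each word from
# the page image, run an image-to-ink model on the crop, then shift the
# resulting strokes back into page coordinates.
from dataclasses import dataclass

Point = tuple[float, float]  # (x, y) position in pixels
Stroke = list[Point]         # one pen-down...pen-up trajectory

@dataclass
class WordBox:
    x: float      # left edge of the word crop on the page
    y: float      # top edge of the word crop on the page
    crop: object  # the cropped image region (placeholder type)

def recognize_word_ink(crop) -> list[Stroke]:
    """Stub for the image-to-ink model; returns strokes in crop-local
    coordinates. Here it just emits a fixed two-stroke example."""
    return [[(0.0, 0.0), (5.0, 8.0)], [(5.0, 8.0), (10.0, 0.0)]]

def derender_page(words: list[WordBox]) -> list[Stroke]:
    """Run the model per word crop and translate each stroke from
    crop-local coordinates into page coordinates."""
    page_ink: list[Stroke] = []
    for w in words:
        for stroke in recognize_word_ink(w.crop):
            page_ink.append([(x + w.x, y + w.y) for (x, y) in stroke])
    return page_ink

# Two word crops on the same line, the second starting at x = 40.
words = [WordBox(x=0, y=0, crop=None), WordBox(x=40, y=0, crop=None)]
ink = derender_page(words)
print(len(ink))   # 4 strokes: two per word
print(ink[2][0])  # first point of the second word, shifted to (40.0, 0.0)
```

Per-word cropping keeps each model input small and uniform, which is one plausible reason for the word-level limitation the article mentions: page layout must be reassembled from the crop offsets afterward.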
Early feedback has been overwhelmingly positive, with users particularly noting the system’s ability to maintain the personal character of handwriting while providing digital benefits. While most AI systems seek to automate human tasks, InkSight takes a different path. It preserves the cognitive benefits and personal intimacy of handwriting while adding the power of digital tools. This subtle but crucial distinction points to a future where technology amplifies rather than replaces human capabilities. In the end, InkSight’s greatest innovation might be its restraint — showing how AI can advance human practices without erasing what makes them human in the first place. source
