Imec spins out Vertical Compute memory chip firm in $20.5M deal

Europe's Imec.xpand is spinning out memory chip firm Vertical Compute in a seed investment round worth $20.5 million. Vertical Compute, founded by CEO Sylvain Dubois (ex-Google) and CTO Sébastien Couet (ex-imec), announced today that it has closed a seed round of $20.5 million, or 20 million euros. The round was led by Imec.xpand and supported by a strong investor base including Eurazeo, XAnge, Vector Gestion and imec.

The funding will support Vertical Compute's ambition to develop a novel vertically integrated memory and compute technology, unlocking a new generation of AI applications. The company says its technology will have a transformative impact, enabling next-generation applications with unparalleled efficiency and privacy. By minimizing data movement and bringing large data closer to computation, the innovation promises energy savings of up to 80%, unlocks hyper-personalized AI solutions, and eliminates the need for remote data transfers, protecting user privacy.

"Memory technologies face limitations in both density and performance scaling, while processor performance continues to surge. The extreme data access requirements of AI workloads exacerbate this challenge, making it imperative to overcome the memory wall to enable the next wave of AI innovations. We believe going Vertical is the path to 100X gains," said Sébastien Couet, CTO of Vertical Compute, in a statement.

Tackling the Memory Wall

The rapid advancements in large language models and generative AI are transforming virtually all industries at an unprecedented pace. However, these large-scale AI models still rely heavily on complex cloud infrastructure and high-bandwidth memories, leading to data transfer latency, high energy consumption and the sending of sensitive data to distant servers. Edge computing can address these issues, but inferencing large AI models on smartphones, PCs or smart home devices faces significant cost, power and scalability constraints.

The big underlying problem is the "memory wall." Static Random Access Memory (SRAM), integrated as caches in the CPU or GPU, is fast but very small and expensive. Dynamic Random-Access Memory (DRAM), the main memory of computer systems, is larger but expensive and energy-hungry. The scaling of both memory technologies in density and performance is slowing while processor speeds and market needs keep increasing, causing a significant bottleneck. This problem is escalating rapidly due to the surging demand for AI workloads, which require vast amounts of data to be accessed quickly. Overcoming this memory wall is crucial for advancing AI inference.

Innovating with Vertical Compute's Chiplet Technology

Vertical Compute is spinning out of Imec. The convergence of large-scale AI models and edge computing calls for a transformative shift in the way data is processed. Vertical Compute will capture this opportunity by developing chiplet-based solutions — which take a modular approach to chip design — leveraging a new way to store bits in a high-aspect-ratio vertical structure. The concept behind Vertical Compute's core patented technology was invented by Sébastien Couet, Imec's former Magnetic Program Director. The core innovation resides in the integration of vertical data lanes on top of computation units. It has the potential to outperform DRAM in terms of density, cost and energy by reducing data movements from centimeters to nanometers.
This promising technology, coupled with an ambitious commercialization plan, has led to the creation of this new semiconductor venture.

"The surge in data-intensive applications like generative AI demands a drastic new approach to transferring data between computing cores and memory units. Our solution is designed to overcome the fundamental scaling limitations of memory technologies by going vertical. We are committed to unlocking the full potential of large language models on the edge without any compromise," said Sylvain Dubois, CEO of Vertical Compute, in a statement. "We want to recruit the very best from all over Europe and finally put Europe at the forefront in terms of tech," Dubois added.

Driving Recruitment and Growth

Vertical Compute is headquartered in Louvain-La-Neuve (BE), with its main R&D offices in Leuven (BE), Grenoble (FR) and Nice (FR). The company is recruiting an elite team of engineers to support its ambitious R&D goals and accelerate the development and commercialization of its chiplet-based technology.

"This seed investment round highlights the confidence in the leadership team's capabilities and the disruptive potential of this game-changing technology. We could not be more excited to collaborate with Sylvain, Sébastien and their team and to help them achieve their ambitious goals," said Tom Vanhoutte from Imec.xpand, in a statement.

"We are confident that, with the ongoing support of our teams and ecosystem, Vertical Compute can become a disruptor in the semiconductor industry. The strong international investor base shows that we are not alone in this belief," said Patrick Vandenameele, co-COO at Imec, in a statement.

Vertical Compute was founded in 2024 to solve the memory bottleneck in computer systems. source

Imec spins out Vertical Compute memory chip firm in $20.5M deal Read More »

10 AI strategy questions every CIO must answer

It's a particularly relevant question now, as governments consider more AI regulations, the courts deal with AI-related cases, and society grapples with the real-world, sometimes tragic, consequences of the technology. Sack says companies need to consider what ethical, legal, and compliance implications could arise from their AI strategies and use cases, and address those earlier rather than later. "Ethical, legal, and compliance preparedness helps companies anticipate potential legal issues and ethical dilemmas, safeguarding the company against risks and reputational damage," he says. "If ethical, legal, and compliance issues are unaddressed, CIOs should develop comprehensive policies and guidelines. Additionally, they should consult with legal experts to navigate regulations and establish oversight committees."

9. What's our risk tolerance, and what safeguards are necessary to ensure safe, secure, ethical use of AI?

Manry says such questions are top of mind at her company. "At Vanguard, we are focused on ethical and responsible AI adoption through experimentation, training, and ideation," she says. "Resulting from senior leader and crew [employee] perspectives, our primary generative AI experimentation thus far has focused on code creation, content creation, and searching and summarizing information."

She advises others to take a similar approach. "CIOs must assess risk tolerance and implement safeguards for generative AI to address safety, security, and ethical concerns. By establishing healthy safeguards like data protection protocols and ethical guardrails, CIOs ensure responsible AI use and minimize risks," she says. "Establish an AI governance framework that defines the organization's risk tolerance and patterns of acceptable use based on data sensitivity, allowing low-risk generative AI use cases to be fast-tracked while applying more rigorous evaluation on higher-risk applications.

"This approach enables teams to innovate safely and efficiently, while ensuring more rigorous safeguards for use cases involving sensitive data. By implementing robust security measures, bias mitigation techniques, and an ethical review process, CIOs can minimize risks and ensure responsible use of AI."

Not all organizations are there yet, though: Data governance research from Lumenalta, which delivers custom digital solutions, found that only 33% of organizations have implemented proactive risk management strategies for AI governance.

10. Am I engaging with the business to answer questions?

CIOs shouldn't be going it alone, says Sesh Iyer, managing director, senior partner and North America co-chair of BCG X, the tech build and design division of Boston Consulting Group. "CIOs must ask themselves whether they are engaging with the business to deliver value with generative AI, whether there is a clear focus on gen AI with a defined pathway to achieving a meaningful return on investments within 12 months, whether they are leveraging the power of the digital ecosystem to support their gen AI agendas, [and] whether they have a clear plan to extract and use data at scale to achieve these goals," Iyer says. "These questions are crucial for CIOs to ensure they are delivering value, targeting spend effectively to achieve returns, and considering velocity-to-value — leveraging intellectual property and products from a broader ecosystem to reach value faster.
Also, they must determine whether they have the ‘digital fuel’ (i.e., data and infrastructure) needed to achieve these AI-driven outcomes.” He advises CIOs to “sit down with the business to devise or refine an integrated ambition agenda” and “develop clear business cases that demonstrate returns within 12 months, establish a robust ecosystem strategy, and actively engage with partners to maximize value.” source
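To make the tiered-governance idea in Manry's advice concrete, here is a minimal, hypothetical sketch of how such a triage rule might be expressed in code. The sensitivity labels, scoring, and review tracks below are illustrative assumptions, not anything prescribed in the article or by Vanguard.

# Hypothetical sketch of a tiered AI-governance triage rule: low-risk
# generative AI use cases are fast-tracked, while higher-risk ones get
# progressively more rigorous review. All labels and thresholds here
# are illustrative assumptions, not from the article.

RISK_SCORES = {"public": 0, "internal": 1, "confidential": 2, "regulated": 3}

def review_path(data_sensitivity: str, customer_facing: bool) -> str:
    """Map a proposed gen AI use case to a governance review track."""
    score = RISK_SCORES[data_sensitivity] + (1 if customer_facing else 0)
    if score <= 1:
        return "fast-track: standard guardrails apply"
    if score <= 2:
        return "standard review: security and bias checks required"
    return "rigorous review: ethics and legal sign-off required"

if __name__ == "__main__":
    print(review_path("internal", customer_facing=False))  # fast-track
    print(review_path("regulated", customer_facing=True))  # rigorous review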

10 AI strategy questions every CIO must answer Read More »

Google Colab vs Jupyter Notebook: Key Differences Explained

Creating, organizing, and sharing computation documents is essential in programming and data science. Most people turn to one of two popular tools — Google Colab and Jupyter Notebook — to help them manage their files. SEE: Learn how to become a data scientist.

Image: Google Colab

What is Google Colab?

Google Colab is a tool offered by Google Research that allows users to write and execute Python code in their web browsers. Colab is based on the open-source Jupyter project and allows you to create and share hosted computation files in the cloud without downloading or installing anything.

Image: Jupyter

What is Jupyter Notebook?

Jupyter is the original free, open-source, web-based interactive computing platform, spun out of the IPython Project; Jupyter Notebook is a web application that allows users to create and share computation documents.

Google Colab vs. Jupyter Notebook: Comparison table

Starting price: Google Colab $9.99 per month; Jupyter Notebook free
Free plan: Google Colab yes; Jupyter Notebook yes
Cloud based: Google Colab yes; Jupyter Notebook no
File syncing: Google Colab yes; Jupyter Notebook no
File sharing: Google Colab yes; Jupyter Notebook no
Library install required: Google Colab no; Jupyter Notebook yes
File view without install: Google Colab yes; Jupyter Notebook yes

Google Colab and Jupyter Notebook: Pricing

Google Colab and Jupyter Notebook are both free to use. Jupyter Notebook was released as an open-source tool under the liberal terms of the modified BSD license, making it 100% free to use. Although Google Colab is also free, you may have to pay for advanced features as your computing needs increase. The following are the paid plans offered by Google Colab:

Pay As You Go: There are no fixed subscription fees; you only pay for what you use.
Colab Pro: For $9.99 per month, you get 100 compute units, access to higher-memory machines, and the ability to use a terminal with the connected virtual machine.
Colab Pro+: For $49.99 monthly, you'll get 500 compute units, faster GPUs, and background execution capability.

Feature comparison: Google Colab vs. Jupyter Notebook

Cloud-based

Google Colab's major differentiator from Jupyter Notebook is that it's cloud-based, and Jupyter isn't. If you work in Google Colab, you don't have to worry about downloading and installing anything to your hardware. It also means you can rest easy knowing that your work will autosave and back up to the cloud without you having to do anything.

Google Colab homepage.

Google Colab is great if you need to work across multiple devices — such as one computer at home and one at work or a laptop and a tablet — because it syncs seamlessly across devices. In contrast, Jupyter Notebook runs on your local machine, and files are saved to your hard disk. Jupyter offers an autosaving interval that you can change, but it doesn't back up to a cloud. Therefore, if your machine fails, you're out of luck. Jupyter can't sync or share your files across devices without a third-party file-sharing service like Dropbox or GitHub.

Dashboard layout on Jupyter Notebook.

Collaboration

We couldn't talk about Jupyter Notebook versus Google Colab without mentioning collaboration. As the name suggests, Google Colab is built to make it easy to share your notebooks with anyone — even if they're not a data scientist.
Other people can view your notebook without downloading any software — a big advantage if you regularly work with nontechies who need to access the files.

Google Colab shareable dashboard for experiments.

Conversely, anyone you share a Jupyter notebook with must install Jupyter Notebook on their device to view it. This won't be a hindrance if you work solely with developers, data scientists, and other tech people who will already have Jupyter installed. If you work on a more diverse team, you might want to consider Google Colab because sharing files is easier.

Library install

Since Google Colab is cloud-based, the tool comes preinstalled with various libraries. This means you don't have to sacrifice precious disk space or time downloading libraries manually. The free version also comes with a certain level of graphics processing unit (GPU) capacity, memory, and runtime, which can fluctuate. You can upgrade to one of the paid plans if additional capacity is needed. Google doesn't disclose limits for any of its Colab plans, citing the need for flexibility.

With Jupyter Notebook, you'll need to install each library you'd like to use onto your device using pip or another package manager. You'll also be limited by your computer's available RAM, disk space, GPU, and CPU. Having notebooks stored on your own hardware is more secure than in a third-party cloud, so the manual library installation can be a plus for sensitive data.

R Scripts

Both Google Colab and Jupyter Notebook allow users to run R scripts, though they are primarily designed for Python. In Google Colab, users can now select R within the Runtime menu. For Jupyter Notebook, users must install an R kernel to work with R on their computer.

Google Colab pros and cons

Pros:
Straightforward interface that's easy to navigate.
Access to GPU and TPU runtimes for free.
Import compatible machine learning and data science projects from other sources.
Automatic version control similar to Google Docs.
Real-time collaboration capability.
Integrates with other tools, including GitHub, Jupyter Notebook, BLACKBOX AI, Codeium, CodeSquire, Google Workspace, Neptune.ai, StrongDM, Google Drive and more.

Cons:
The free plan gives you limited resources.
Some users reported issues with the speed of loading new databases and data frames that are present offline.

Jupyter Notebook pros and cons

Pros:
Modern, intuitive, and interactive user interface.
Supports markdown language for documentation.
Interactive interface makes it easy for users to share images, code, and text in one place.
Supports multiple programming languages, including Python, R, and Julia.

Cons:
Some users reported that the software gets slow or crashes sometimes when working with large datasets or carrying
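To make the setup difference described under "Library install" concrete, here is a minimal sketch. It assumes pandas and NumPy as the example libraries; both are among the packages commonly preinstalled in Colab, though the exact preinstalled set varies and isn't exhaustively documented.

# In Google Colab, many data science libraries come preinstalled, so a
# notebook cell can typically import them directly:
import pandas as pd
import numpy as np

df = pd.DataFrame({"x": np.arange(5), "y": np.arange(5) ** 2})
print(df)

# In a local Jupyter Notebook, you install the tools yourself first,
# for example from a terminal:
#   pip install notebook pandas numpy
#   jupyter notebook
# or from inside a notebook cell:
#   !pip install pandas numpy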

Google Colab vs Jupyter Notebook: Key Differences Explained Read More »

Africa's digital economy and digital transformation

With rich resources like a growing physical infrastructure and subsea cable network, Africa is uniquely positioned to emerge as a leader among today's developing economies. A key factor in this potential is the improvement of internet connection in Africa, which is central to facilitating the continent's digital transformation. The African Union commits to growing Africa's already burgeoning digital economy through The Digital Transformation Strategy for Africa (2020-2030), stating: "Innovations and digitalization are stimulating job creation and contributing to addressing poverty, reducing inequality, facilitating the delivery of goods and services, and contributing to the achievement of Agenda 2063 and the Sustainable Development Goals"1

Additionally, the continent is young, with a median age of 20 years, and experiencing population growth, with its 1.4 billion inhabitants making up 15% of the global population. This bodes well for growth in market size, GDP, and a population of digitally fluent consumers.

Public and private sector efforts to boost Africa's digital economy

Global public and private institutions recognize Africa's position as an emerging digital economy on the world stage. For instance, the U.S., European Union (EU), China, and India all have strategic programs in place for a solid digital infrastructure on the African continent. The foreign direct investment (FDI) sector financed $30 billion in sustainability projects, often referred to as "global greenfield megaprojects," according to the UN's latest World Investment Report.

Barriers to Africa's digital transformation

In the near term, Africa's land-based (terrestrial) infrastructure can hinder the strides forward many see on the horizon for the continent. For data center capacity to spread to more regions of Africa, there will need to be a major effort to create structure for overland routes. Additionally, Africa needs 500,000 kilometers of fiber-optic cable construction to connect the continent, says the International Finance Corporation (IFC). For enterprises to leap over these boundaries, they need partners with knowledge, sophistication, and a keen understanding of how business works from both a continental and a regional perspective, such as those offering specialized services like colocation in Africa.

The future of greater digital access to the African economy

In this article, we'll provide an overview of Digital Realty's capabilities to connect enterprises to the opportunities on the African continent, touching on topics such as:

The growth of the digital economy in Africa
Africa's digital infrastructure both now and in the future
Digital Realty's unique positioning as a digital transformation leader in Africa
Potential challenges and opportunities for leading enterprises expanding to the African continent

First, we will highlight interesting developments and results from efforts to provide greater digital access to the African economy.

The growth of the digital economy in Africa

Since 2020, the African Union (AU) has partnered with public and private institutions to fund its goal of uniting the continent through universal internet access. This attracted billions of dollars for digital infrastructure investments in Africa.
Here's a summary of the results so far, as researched by the World Bank:

115% – Between 2016 and 2021, internet users increased by 115% in Sub-Saharan Africa
160 million – The number of Africans who gained broadband access between 2019 and 2022
191 million – New recipients or senders of digital payments between 2014 and 2022

The African Union enacted a 10-year strategy to enhance Africa's digital economy in February 2020. The release of the Digital Transformation Strategy for Africa attracted financial support from the World Bank, which set off a series of funding initiatives spanning the globe and the public and private sectors.

Government investment leads to growth of Africa's digital economy

AU efforts led to World Bank investment. One year after the AU's digital transformation strategy, the World Bank launched the All Africa Digital Economy Moonshot. This initiative aims to "digitally connect every individual, business, and government in Africa by 2030."

Results: By January 2024, the World Bank had closed on $731.8 million in financial commitments across 11 digital transformation projects in Sub-Saharan Africa. The organization has also secured $2.8 billion for 24 more digital development projects since 2014.

The EU launched the EU-Africa Global Gateway Investment Package of €150 billion in investments. In addition to sustainability, climate resilience, and biodiversity projects, the Global Gateway aims to fast-track universal access to reliable internet in Africa by 2030.

Progress: The Global Gateway project features the AU-EU Digital4Development (D4D) Hub, connecting North Africa to EU countries with an extension into West Africa via Dakar, Senegal. (European Commission)

The U.S. launched the Digital Transformation with Africa Initiative (DTA) in December 2022, committing $800 million to the continent's digital transformation journey. (Carnegie Endowment for International Peace)

Results: In its first year, the DTA funded $82 million in four all-Africa initiatives and more than 20 regional projects focused on country-specific goals. (Carnegie Africa analysis)

Of particular interest is the investment in:

Digital trade alliances
Funding infrastructure for information and communications technology (ICT)
Feasibility studies to expand internet access to rural parts of Africa

These efforts have also led to a cascade of private investments from some of the world's largest technology enterprises.

Future impact of public and private digital infrastructure investment in Africa

One purpose of these investments is to leverage Africa's unique status as the fastest-growing continent by population and gross domestic product (GDP), according to United Nations (UN) and African Development Bank figures. The ultimate payoff will be Africa's contribution of $180 billion in GDP to the global economy by 2025 and a potential $712 billion by 2050. Leading enterprises know the time is now to partner with experts with an established presence in Africa's digital infrastructure transformation.

Digital infrastructure critical to Africa's data sovereignty

Currently, Africa represents two percent of the world's data center footprint, which greatly affects the continent's data sovereignty efforts.2 The data regulations landscape on the continent remains fluid, but it's also a top priority within established data economies in Africa. For example, in 2023 the Data Protection Act became law in Nigeria, providing data protection guardrails that previously did not exist.
Africa's data center landscape

This push for data sovereignty and more stringent data regulation calls for enterprises to establish partnerships with an experienced multi-tenant data center (MTDC) operator with a wide

Africa's digital economy and digital transformation Read More »

Hyland CIO Stephen Watt on emerging purpose-built AI platforms

00:00 Hello. Good afternoon and welcome to CIO Leadership Live. I'm Maryfran Johnson, CEO of Maryfran Johnson Media and the former editor in chief of CIO magazine and events. Since November 2017, this video show and audio podcast has been produced by the editors of cio.com and the digital media division of Foundry, which is an IDG company. Our sponsor for this episode today is Ohio-based Hyland Software, which provides industry-leading enterprise content management and process management software. Hyland's content solutions empower its customers to deliver exceptional experiences by connecting their systems and managing high volumes of diverse content that accelerates and automates processes and workflows. Visit the hyland.com website to learn more.

And now onward to today's guest, who I'm very pleased to say is the Senior Vice President and CIO at Hyland Software, Stephen Watt. Steve is responsible for the global delivery of corporate IT services for Hyland's rapidly expanding employee base, now encompassing more than 3,500 Hylanders around the world. He joined Hyland 20 years ago as an IS infrastructure administrator, back when the company had only 185 employees and an IT team of six people. As the business grew rapidly and expanded globally, Steve played key roles in the deployment of new technologies that supported the company's explosive growth. He served in several key roles during his long tenure, much of that focused on infrastructure, systems, process and policy, and he was promoted into the CIO role officially in early 2021. Prior to Hyland, he worked as an IT director in the education industry and as an independent IT consultant. Steve, welcome and thanks for joining me here today.

Thank you very much, Maryfran. It's a pleasure to be here.

All right, excellent. Now, as we talked about while we were getting ready for this show today, it's really rare, and I think I called you a unicorn, because it's rare to find a CIO who has such a long tenure with a single company. Talk about what has kept you challenged and engaged at Hyland Software all of these years.

Yeah, for me personally, it's been quite a journey. You know, when I was working in the IT field, especially as I was finishing school, I was planning on a career in electrical engineering, and I happened to get in contact with what Hyland was. And really, over this time, it's been a couple of specific things for me. I was really interested in working on the IT front at a company that was sort of in that mid market, that was on a growth trajectory, you know, striving to become enterprise. And that was very attractive. And then really, what's kept me here is that with our growth and everything, there's never been a slowdown in the number of challenges that we've had to solve. And I love solving interesting problems. And then the cliche answer is that I've truly enjoyed the people that I've worked with over this amount of time. I think we've all had those jobs where we did not enjoy the people we were working with on a day-to-day basis, but Hyland, I think, has had a unique position to attract great people that I've really had an honor working with, and, you know, being a part of that journey with them. So that has been a big part of it as well, just the pleasure I've had with my colleagues. And, you know, solving big problems together is a great way to spend your time.
Well, tell us too a little bit about Hyland's market, the people that buy the Hyland software. That has expanded over time as well. Tell us a little bit about that customer base out there that has been part of that explosive growth.

Yeah, definitely. We sell into almost any vertical market that you can think of, whether it's healthcare, financial services, or commercial manufacturing. Our software is pretty ubiquitous in its application, and a lot of that is due to the configurability and, you know, the number of solutions that we can offer our customers today. That has been exciting. Our customer base has continued to grow significantly since I've been with the organization. And it's been interesting to see how that presents itself. It comes with its own unique challenges as you enter into highly regulated markets like healthcare or financial services, but again, it's fun to solve those interesting challenges.

So yeah, and around the world at this point, you've got somewhere around 15,000 customers?

That is correct. Yeah, around 15,000 customers across pretty much every vertical market you can think of.

Okay, now let's talk about the size and scope of your technology team. That team of six people that you started out with back 20 years ago is now about the size that the entire company was when you joined. It's upwards of 170 folks in IT now.

Yes, that is correct, yeah. So that's been kind of a surreal experience in itself. Seeing that growth, when you're running a department that's the same size as the entire organization once was, kind of puts in perspective, you know, what the stakes are, and the fact that you shoulder a lot of responsibility to help the organization continue to move in the right direction, yeah.

How have you structured that technology team to deliver the most value to the rest of the business? And I take it that's changed a great deal over the 20 years. So what is the most recent snapshot of what your org structure looks like for the technology group?

Yeah, you know, the way that we're structured now is, it's definitely not a unique perspective on what we do, but we try to organize in sort of

Hyland CIO Stephen Watt on emerging purpose-built AI platforms Read More »

Understanding the Buyer's Journey: A Comprehensive Guide

A buyer journey is a sales process from the perspective of the customer. It refers to the buyer's mindset when identifying their problem, comparing possible solutions, and making a purchasing decision. By understanding the flow through this purchasing process, sellers can engage appropriately and with the important information buyers want and need to make a purchase.

What is the buyer's journey?

The buyer's journey is the process of a potential customer going from identifying a need to completing a purchase. Distinct buyer journey stages represent the customer's mindset and decision-making process. An effective way to utilize sales software like a CRM system is to map out this process to create personalized and intentional content and triggers. Buyer journey mapping is a marketing and sales strategy that allows businesses to maintain a unified flow of information for sales reps, instructing them on how to engage with buyers depending on where they are in their journey.

What are the 3 stages of the buyer's journey?

Regardless of your industry, both B2B and B2C selling strategies have three main stages in any buyer's journey: awareness, consideration, and decision. These stages follow the buyer realizing their need, comparing potential options, and then choosing a solution. It's important to have a clear understanding of the mindset of a buyer at each stage. This way, you can push content and nurture the lead appropriately to get them closer to the sale.

Stage 1: Awareness

The first stage of the customer buying journey is awareness. Awareness refers to when a buyer realizes they have a problem or a need. This is when a buyer begins to think about how this problem or need affects their life and how a solution could fix it. This phase might include basic research into other people's experiences with the problem and potential solutions. To capture a buyer's attention at this stage, I suggest avoiding coming off as too salesy. This isn't the time to pitch your product directly. Instead, create resources and share information about the problem your solution solves. Examples of these resources include:

Customer use cases: Highlighting real-world use cases on your website or social media can provide an unbiased look at the benefits and demonstrate the solution in action.
Expert seminars: Hosting webinars with industry experts can position your company as a leader in the field and create opportunities for buyers to improve their skills.
Knowledge bases: A public-facing knowledge base is a valuable resource that can assist buyers during the research portion of this first stage.
Stage 2: Consideration

The second stage of the buyer journey is consideration. Buyers in this stage are researching and comparing potential solutions more actively. At this point, a buyer is directly comparing your solution to your competitors'. They're online looking at reviews, available pricing information, support packages, and more. Buyers at this stage are more committed to remedying their problem. As a seller, I recommend building on their interest in your solution by providing newsletters, pricing transparency, and personalized touchpoints.

Stage 3: Decision

The third and final stage of the buyer journey is decision. This is when a buyer is ready to make a purchasing decision. Buyers have considered price, real customer reviews, benefits, features, onboarding, and more. They know exactly what they want and are ready to look ahead to implementation. This is when sellers need to close the deal. I suggest that you provide sales reps with documentation and training so they can practice handling objections and rebuttals. This preparation ensures you are engaging the actual decision maker for the purchase and are prepared to answer any last-minute questions confidently.

Buyer vs customer journey

While you might see the terms buyer journey and customer journey together, there is a difference between the two. A buyer journey focuses on the path a customer follows to complete a purchase, with the end goal being the sale. A customer journey can follow that same path but extends beyond the purchase and includes onboarding, support, and even customer retention. A buyer journey is meant to identify and obtain a customer, and a customer journey's purpose is to retain and support those customers.

Some key differences to help identify a buyer vs customer journey:

Focus: A buyer journey requires a focus on the customer's motivations and decision-making, while the customer journey focuses on their experience with the brand itself.
Journey length: Since the buyer's journey ends with a purchase, that timeline is much shorter. The customer journey is longer, including the customer's lifetime journey with your brand.

Buyer's journey example

To help demonstrate the different stages, I've compiled a B2B example of a buyer's journey in the recruitment industry. The buyer is a potential staffing client who is looking for an employment agency to help fill a role.

Awareness: The client realizes they need to hire a new employee to lead a big initiative for the company. The client will be the hiring manager and has been given an approved budget for the acquisition.
Consideration: The client pitches this open position to various local staffing agencies. The client realizes they want someone immediately, and their timeline is pushed up, so they prioritize efficiency, skillset, and budget when selecting the best candidate.
Decision: The client has interviewed four different agencies and is ready to select one. They
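As a hypothetical illustration of buyer journey mapping, the sketch below models the three stages and the engagement content suggested in this article as a simple lookup, the kind of stage-to-trigger mapping a CRM workflow might encode. The function and labels are illustrative assumptions, not part of any particular CRM product.

# Illustrative sketch: the three buyer journey stages mapped to the
# engagement content this article recommends for each stage.
from enum import Enum

class Stage(Enum):
    AWARENESS = "awareness"
    CONSIDERATION = "consideration"
    DECISION = "decision"

ENGAGEMENT = {
    Stage.AWARENESS: ["customer use cases", "expert seminars", "knowledge base articles"],
    Stage.CONSIDERATION: ["newsletters", "pricing transparency", "personalized touchpoints"],
    Stage.DECISION: ["objection-handling documentation", "decision-maker outreach"],
}

def next_touchpoints(stage):
    """Return the recommended engagement content for a lead at the given stage."""
    return ENGAGEMENT[stage]

if __name__ == "__main__":
    for stage in Stage:
        print(f"{stage.value}: {', '.join(next_touchpoints(stage))}")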

Understanding the Buyer's Journey: A Comprehensive Guide Read More »

Microsoft Rings in 2025 With Record Security Update

Microsoft's January update contains patches for a record 159 vulnerabilities, including eight zero-day bugs, three of which attackers are already actively exploiting. The update is Microsoft's largest ever and is notable also for including three bugs that the company said were discovered by an artificial intelligence (AI) platform.

Microsoft assessed 10 of the vulnerabilities disclosed this week as being of critical severity and the remaining ones as important bugs to fix. As always, the patches address vulnerabilities in a wide range of Microsoft technologies, including Windows OS, Microsoft Office, .NET, Azure, Kerberos, and Windows Hyper-V. They include more than 20 remote code execution (RCE) vulnerabilities, nearly the same number of elevation-of-privilege bugs, and an assortment of denial-of-service flaws, security bypass issues, and spoofing and information disclosure vulnerabilities.

Three Vulnerabilities to Patch Immediately

Multiple security researchers pointed to the three actively exploited bugs in this month's update as the vulnerabilities that need immediate attention. The vulnerabilities, identified as CVE-2025-21335, CVE-2025-21333, and CVE-2025-21334, are all privilege escalation issues in a Windows Hyper-V NT kernel component. Attackers can exploit these bugs relatively easily, and with minimal permissions, to gain system-level privileges on affected systems. Microsoft itself has assigned each of the three bugs a relatively moderate severity score of 7.8 out of 10 on the CVSS scale. But the fact that attackers are already exploiting the bugs means organizations cannot afford to delay patching them.

"Don't be fooled by their relatively low CVSS scores of 7.8," said Kev Breen, senior director of threat research at Immersive Labs, in emailed comments. "Hyper-V is heavily embedded in modern Windows 11 operating systems and used for a range of security tasks."

Microsoft has not released any details on how attackers are exploiting the vulnerabilities. But it is likely that threat actors are using them to escalate privileges after they have gained initial access to a target environment, according to researchers. "Without proper safeguards, such vulnerabilities escalate to full guest-to-host takeovers, posing significant security risks across your virtual environment," researchers at Automox wrote in a blog post this week.

Five Publicly Disclosed but Not Yet Exploited Zero-Days

The remaining five zero-days that Microsoft patched in its January update are all bugs that have been previously disclosed but which attackers have not yet exploited. Three of the bugs enable remote code execution and affect Microsoft Access: CVE-2025-21186 (CVSS: 7.8/10), CVE-2025-21366 (CVSS: 7.8/10), and CVE-2025-21395. Microsoft credited the AI-based vulnerability hunting platform Unpatched.ai with finding the bugs. "Automated vulnerability detection using AI has garnered a lot of attention recently, so it's noteworthy to see this service being credited with finding bugs in Microsoft products," Satnam Narang, senior staff research engineer at Tenable, wrote in emailed comments. "It may be the first of many in 2025."

The other two publicly disclosed but as-yet-unexploited zero-days in Microsoft's January security update are CVE-2025-21275 (CVSS: 7.8/10) in Windows App Package Installer and CVE-2025-21308 in Windows Themes. Both enable privilege escalation to SYSTEM and therefore are high-priority bugs to fix as well.
Other Critical Vulns

In addition to the zero-days, there are several other vulnerabilities in the latest batch that merit high-priority attention. Near the top of the list are three CVEs to which Microsoft has assigned near-maximum CVSS scores of 9.8 out of 10: CVE-2025-21311 in Windows NTLMv1 on multiple Windows versions; CVE-2025-21307, an unauthenticated RCE flaw in the Windows Reliable Multicast Transport Driver; and CVE-2025-21298, an arbitrary code execution vulnerability in Windows OLE.

According to Ben Hopkins, cybersecurity engineer at Immersive Labs, Microsoft likely rated CVE-2025-21311 as critical because of the potentially severe risk it presents. "What makes this vulnerability so impactful is the fact that it is remotely exploitable, so attackers can reach the compromised machine(s) over the Internet," he wrote in emailed comments. "The attacker does not need significant knowledge or skills to achieve repeatable success with the same payload across any vulnerable component."

CVE-2025-21307, meanwhile, is a use-after-free memory corruption bug that affects organizations using the Pragmatic General Multicast (PGM) multicast transport protocol. In such an environment, an unauthenticated attacker only needs to send a malicious packet to the server to trigger the vulnerability, Ben McCarthy, lead cybersecurity engineer at Immersive Labs, wrote in emailed comments. Attackers who successfully exploit the vulnerability can gain kernel-level access to affected systems, meaning organizations using the protocol need to apply Microsoft's patch for the flaw immediately, McCarthy added.

Tyler Reguly, associate director of security R&D at Fortra, described CVE-2025-21298 — the third 9.8-severity bug — as an RCE flaw that an attacker would likely exploit via email rather than over the network. "The Microsoft Outlook preview pane is a valid attack vector, which lends itself to calling this a remote attack. Consider reading all emails in plaintext to avoid vulnerabilities like this one," he noted in emailed comments.

Microsoft's January 2025 update stands in stark contrast to January 2024's, when the company disclosed just 49 CVEs. According to data from Automox, the company issued patches for 150 CVEs in April 2024 and for 142 in July. source

Microsoft Rings in 2025 With Record Security Update Read More »

Cerebras Systems teams with Mayo Clinic on genomic model that predicts arthritis treatment

Cerebras Systems has teamed with Mayo Clinic to create an AI genomic foundation model that predicts the best medical treatments for people with rheumatoid arthritis. It could also be useful in predicting the best treatment for people with cancer and cardiovascular disease, said Andrew Feldman, CEO of Cerebras Systems, in an interview with GamesBeat.

Mayo Clinic, in collaboration with Cerebras Systems, announced significant progress in developing artificial intelligence tools to advance patient care today at the JP Morgan Healthcare Conference in San Francisco. As part of Mayo Clinic's commitment to transforming healthcare, the institution has led the development of a world-class genomic foundation model designed to support physicians and patients.

Like Nvidia and other semiconductor companies, Cerebras is focused on AI supercomputing. But its approach is much different from Nvidia's, which relies on individual AI processors. Cerebras Systems designs an entire wafer — with many chips on a single wafer of silicon — whose chips collectively solve big AI problems and other computing tasks with much lower power consumption. Feldman said it took tens of such systems to compute the genomic foundation model over months of time. Still, that was far less time, effort, power and cost than traditional computing solutions would have required, he said. PitchBook recently predicted that Cerebras would have an IPO in 2025.

Cerebras Systems' calculations can determine which treatment will work on a given patient with rheumatoid arthritis.

Building on Mayo Clinic's leadership in precision medicine, the model is designed to improve diagnostics and personalize treatment selection, with an initial focus on rheumatoid arthritis (RA). RA treatment presents a significant clinical challenge, often requiring multiple attempts to find effective medications for individual patients. Traditional approaches examining single genetic markers have shown limited success in predicting treatment response.

The joint team's genomic model was trained by mixing publicly available human reference genome data with Mayo's comprehensive patient exome data. The human reference genome is a digital DNA sequence representing a composite, "idealized" version of the human genome. It serves as a standard framework against which individual human genomes can be compared, enabling researchers to identify genetic variations. In contrast to models trained exclusively on the human reference genome, Mayo's genomic foundation model demonstrates significantly better results on genomic variant classification because it was also trained on data sourced from 500 Mayo Clinic patients. As more patient data is incorporated into training, the team expects continuous improvement in model quality.

The team designed new benchmarks to evaluate the model's clinically relevant capabilities, such as detecting specific medical conditions from DNA data, addressing a gap in publicly available benchmarks, which focus primarily on identifying structural elements like regulatory or functional regions.

Cerebras Systems said its AI prediction for treatment is highly accurate. The Mayo Clinic Genomic Foundation Model demonstrates state-of-the-art accuracy in several key areas: 68-100% accuracy on RA benchmarks, 96% accuracy in cancer predisposition prediction, and 83% accuracy in cardiovascular phenotype prediction.
These capabilities align with Mayo Clinic's vision of delivering world-leading healthcare through AI technology. More testing will need to be done to verify the results, Feldman said.

"Mayo Clinic is committed to using the most advanced AI technology to train models that will fundamentally transform healthcare," said Matthew Callstrom, Mayo Clinic's medical director for strategy and chair of radiology, in a statement. "Our collaboration with Cerebras enabled us to create a state-of-the-art AI model for genomics. In less than a year, we've developed promising AI tools that will help our physicians make more informed decisions based on genomic data."

"Mayo's genomic foundation model sets a new bar for genomic models, excelling not only in standard tasks like predicting functional and regulatory properties of DNA but also enabling discoveries of complex correlations between genetic variants and medical conditions," said Natalia Vassilieva, field CTO at Cerebras Systems, in a statement. "Unlike current approaches focused on single-variant associations, this model enables the discovery of connections where collections of variants contribute to a particular condition."

Cerebras Systems can parse the meaning of mutations.

The rapid development of these models – typically a multi-year endeavor – was accelerated by training Mayo Clinic's custom models on the Cerebras AI platform. The Mayo Genomic Foundation Model represents a significant step toward enhancing clinical decision support and advancing precision medicine. Cerebras' flagship product is the CS-3, a system powered by the Wafer-Scale Engine-3.

Advancing AI for chest X-rays

Separately, Mayo Clinic today unveiled groundbreaking collaborations with Microsoft Research and with Cerebras Systems in the field of generative artificial intelligence (AI), designed to personalize patient care, significantly accelerate diagnostic time and improve accuracy. Announced during the J.P. Morgan Healthcare Conference, the projects focus on developing and testing foundation models customized for various applications, leveraging the power of multimodal radiology images and data (including CT scans and MRIs) with Microsoft Research and genomic sequencing data with Cerebras. The innovations have the potential to transform how clinicians approach diagnosis and treatment, ultimately leading to better patient outcomes.

Foundation AI models are large, pre-trained models capable of adapting to and carrying out many tasks with minimal extra training. They learn from massive datasets, acquiring general knowledge that can be used across diverse applications. This adaptability makes them efficient and versatile building blocks for numerous AI systems.

Mayo Clinic and Microsoft Research are collaboratively developing foundation models that integrate text and images. For this use case, Mayo and Microsoft Research are working together to explore the use of generative AI in radiology using Microsoft Research's AI technology and Mayo Clinic's X-ray data. Empowering clinicians with instant access to the information they need is at the heart of this research project. Mayo Clinic aims to develop a model that can automatically generate reports, evaluate tube and line placement in chest X-rays, and detect changes from prior images. This proof-of-concept model seeks to improve clinician workflow and patient care by providing a more efficient and comprehensive analysis of radiographic images. The Mayo Clinic has 76,000 people

Cerebras Systems teams with Mayo Clinic on genomic model that predicts arthritis treatment Read More »

3 Strategies For a Seamless EU NIS2 Implementation

Businesses everywhere face pressure to enhance their security postures as cyberattacks rise across sectors. Even so, many organizations have been hesitant to invest in cybersecurity for a variety of reasons, such as budget constraints and operational issues. The EU's new Network and Information Security Directive (NIS2) confronts this hesitancy head-on by making it mandatory for companies in Europe – and those doing business with Europe – to invest in cybersecurity and prioritize it regardless of budgets and team structures.

What Is NIS2?

The first NIS Directive was implemented in 2016 as the EU's endeavor to unify cybersecurity strategies across member states. In 2023, the commission introduced the NIS2 Directive, a set of revisions to the original NIS. Each member state was required to implement the NIS2 recommendations into its own national legal system by October 17, 2024.

The original NIS focused on improving cybersecurity for several sectors, such as banking and finance, energy, and healthcare. NIS2 expands that scope to other entities, including digital services, such as domain name system (DNS) service providers, top-level domain (TLD) name registries, social networking platforms and data centers, along with manufacturing of critical products, such as pharmaceuticals, medical devices and chemicals; postal and courier services; and wastewater and waste management.

Organizations in these industries are now required to implement more robust cyber risk management practices like incident reporting, risk analysis and auditing, resilience/business continuity and supply chain security. For example, member states must ensure TLD name registries and domain registration services collect accurate and complete registration data in a dedicated database. The new regulations also strengthen supervision and enforcement mechanisms, requiring national authorities to monitor compliance, investigate incidents and impose penalties for non-compliance.

The goal of these new measures is to ensure the stability of society's infrastructure in the face of cyber threats. Entities in the EU will benefit from adopting these security measures over the long run, better preventing a devastating cyberattack. In doing so, they will also avoid the NIS2 penalties, which are significantly more punitive and clearly defined than those created under the original directive.

Impact on Organizations

Much like how the European Union's General Data Protection Regulation (GDPR) reset the standard for privacy globally, NIS2 sets clear requirements for businesses to establish stronger security defenses, but not without a cost. Failing to comply can lead to severe financial penalties and legal implications.

The official launch of NIS2 in October was met with mixed reactions. While some organizations could testify that they had been preparing all along, many others had left NIS2 on the back burner. In addition, as a result of the new sectors covered by NIS2, there were businesses that did not initially believe they would be impacted and therefore had not laid their own groundwork.

All this said, it will be interesting to see how penalty enforcement plays out in 2025.
If organizations don't demonstrate compliance early in the new year, or at least show progress toward becoming compliant, I predict we will start to see consequences, though it may be too soon to tell which sectors will face them first.

To those still grappling with NIS2 implementation, it may understandably seem like a daunting task, but it does not have to be. Here are three actions organizations can take today to ensure a more seamless NIS2 implementation:

1. Evaluate your business partners.

NIS2 is not just about strengthening one business's security; it also demands that businesses thoroughly evaluate every entity they engage with in their supply chain. A chain is only as strong as its weakest link, and the same can be said for businesses and their partners' security postures. It is essential for organizations to audit their partners to ensure every entity they do business with meets NIS2 requirements. Evaluating any security gaps now can help avoid overlooked issues down the road.

2. Consolidate your domains.

We have heard anecdotally that some businesses are not fully aware of their domain registrars or who is responsible for managing and securing the domains within their organization. This lapse in knowledge creates more than siloed work environments; it can cause major repercussions when it comes to secure domain management and NIS2 compliance. Taking a more consistent, consolidated approach to managing and securing domains helps strengthen an organization's overall domain security and checks one more task off the team's compliance checklist.

3. Stay security-minded, organization-wide.

With the new NIS2 requirements, businesses must report cybersecurity incidents within 24 hours. This demand requires an organization-wide culture shift to a more security-minded approach to the way they do business. For example, businesses may need to evaluate what cybersecurity protocols they have in place to secure the way they interact with their customers and their supply chain. Without security being top of mind, businesses may miss NIS2 requirements, which could lead to revenue loss, loss of customers and even dents in their reputation. This shift doesn't happen overnight, but working with security-minded partners helps organizations stay a step ahead in their security.

As cybercriminals become more elusive in targeting reputable organizations, and as global geopolitical tensions leave many companies in the crossfire of nation-state attacks, adhering to NIS2 standards becomes all the more critical. These three strategies are guiding principles for organizations to contribute to a safer, more secure enterprise environment in Europe and around the world. source

3 Strategies For a Seamless EU NIS2 Implementation Read More »

Microsoft commits to AI integration, but delivers no particulars to differentiate from rivals

“More so than any previous platform shift, every layer of the application stack will be impacted. It’s akin to GUI, internet servers, and cloud-native databases all being introduced into the app stack simultaneously. Thirty years of change is being compressed into three years,” Nadella said. “This is leading to a new AI-first app stack — one with new UI/UX [user interface/user experience] patterns, runtimes to build with agents, orchestrate multiple agents, and a reimagined management and observability layer. In this world, Azure must become the infrastructure for AI, while we build our AI platform and developer tools — spanning Azure AI Foundry, GitHub, and VS Code — on top of it.” Info-Tech’s Brunet said part of the challenge with Microsoft is that they offer so many different options, many overlapping, that “it can feel like a very fragmented offering that can be very confusing. They are trying to make their infrastructure and offerings feel less fragmented.” He said that he sees this as Microsoft’s way of leveraging the Azure cloud “to make it easier to stitch their pieces together.” source

Microsoft commits to AI integration, but delivers no particulars to differentiate from rivals Read More »