3 Firms Build HIG's CA$1.3B Take-Private Of Converge

By Jade Martinez-Pogue (February 7, 2025, 1:15 PM EST) — Converge Technology Solutions Corp. on Friday announced that it has agreed to go private in a sale to private equity shop H.I.G. Capital, a deal with an enterprise value of CA$1.3 billion ($909.6 million) that was built by three law firms….


Why firewalls and VPNs give you a false sense of security

In the ever-changing landscape of cybersecurity threats, traditional pillars like firewalls and VPNs are struggling to keep pace with the evolving challenges. Organizations are facing an upsurge in security breaches and vulnerabilities that surpass the capabilities of these longstanding security measures. The shift from on-premises work environments to more remote and digital setups has forced industries to rethink their security strategies. Once hailed as the foundation of security, firewalls and VPNs now find themselves outdated and inadequate. While they once provided a level of security, these tools now reveal vulnerabilities that can leave companies exposed to risks, particularly as they embrace digital transformation.

In 2025, innovation in generative AI, automation, and IoT/OT technologies is poised to push boundaries across various industries. This progress, while groundbreaking, also presents new challenges. It enables attackers to automate phishing campaigns, create evasive malware, expedite threat development through AI, and offer Ransomware-as-a-Service (RaaS).

With the increasing concerns surrounding cybersecurity breaches, the focus has shifted toward the potential vulnerabilities in VPNs that could grant attackers unauthorized access. A recent Cybersecurity Insider survey found that 56% of organizations were targets of cyberattacks exploiting VPN security vulnerabilities in the last year. Moreover, a staggering 91% of respondents express concern about VPNs leading to a compromising breach. Even with strong firewalls in place, major organizations remain vulnerable to breaches. Below is a deeper look at why firewalls and VPNs might not be providing sufficient protection.

A thinner sheet of protection across a larger attack surface

VPNs and firewalls play a crucial role in extending networks, but they also come with risks.
By connecting more users, devices, locations, and clouds, they inadvertently expand the attack surface with public IP addresses. This expansion allows users to work remotely from anywhere with an internet connection, further stretching the network’s reach. Moreover, the rise of IoT devices has led to a surge in Wi-Fi access points within this extended network. Even seemingly innocuous devices like Wi-Fi-connected espresso machines, meant for a quick post-lunch pick-me-up, contribute to the proliferation of new attack vectors that cybercriminals can exploit.

Perimeter-based architecture means more work for IT teams

More doesn’t mean better when it comes to firewalls and VPNs. Expanding a perimeter-based security architecture rooted in firewalls and VPNs means more deployments, more overhead costs, and more time wasted for IT teams, but less security and less peace of mind. Pain also comes in the form of degraded user experience and satisfaction with VPN technology across the entire organization due to backhauled traffic. Other challenges, like the cost and complexity of patch management, security updates, software upgrades, and constantly refreshing aging equipment as an organization grows, are enough to exhaust even the largest and most efficient IT teams. The bigger the network, the more operational complexity and time required.

VPNs and firewalls can’t effectively guard against today’s threat landscape

VPNs and firewalls deployed to protect and defend network access behave a lot like a security guard posted at the front of a store to stop theft:

- Security guards are stationed at the front door of a valuable store, tasked with identifying and stopping attacks, but can’t monitor all entrances at the same time. Likewise, firewalls and VPNs are deployed at key access points to an organization’s network but can’t stop all the threats across every access point.
- Once an attacker gets past the guard, they get access to the entire store. Likewise, firewalls and VPNs permit lateral threat movement by placing users and entities onto the network.
- One-to-few threat detection can’t scale unless you hire a lot of security guards to monitor all entrances. Likewise, firewalls and VPNs can’t inspect encrypted traffic and enforce real-time security policies at scale.
- Guards can be slow, tired, expensive to hire, and late for their shift, among other issues that allow threats to go undetected and unanswered. Likewise, firewalls and VPNs suffer from a variety of other challenges related to cost, complexity, operational inefficiency, poor user experiences, organizational rigidity, and more.

Much like a lone security guard, VPNs and firewalls can help mitigate some risks, but they can’t keep up with the scale and complexity of today’s cybercrime. Your network is extending exponentially as you digitally transform your organization. With constant attacks on the horizon and a thinner cover of protection, how many million security guards can you hire?

The Zero Trust Exchange delivers on the promise of security

Unlike network-centric technologies like VPNs, zero trust architecture minimizes your attack surface and connects users to the apps they need directly, without putting anyone or anything on the network as a whole. Zscaler delivers zero trust with its cloud-native platform, the Zscaler Zero Trust Exchange. The Zero Trust Exchange starts with the premise that no user, workload, or device is inherently trusted. The platform brokers a secure connection between a user, workload, or device and an application, over any network, from anywhere, by looking at identity, app policies, and risk. As threats grow more dangerous, we can’t rely on a single security guard to keep everybody out anymore. VPNs and firewalls were designed to make organizations feel secure, but with today’s evolving threats highlighting the cracks in these technologies, IT and security teams are left with a false sense of security.
Truly secure digital transformation can only be delivered by implementing a zero trust architecture. The Zscaler Zero Trust Exchange is a comprehensive cloud platform designed to keep your users, workloads, IoT/OT, and B2B traffic safe in an environment where VPNs and firewalls can’t. If you’d like to learn more, this webinar serves as an introduction to zero trust, or, to go a level deeper, consider registering for a free interactive whiteboard workshop.
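The default-deny, per-request brokering described above can be sketched in a few lines. This is an illustrative model only, with hypothetical app names and policy fields, not Zscaler's actual API: a request is denied unless identity, device posture, per-app policy, and risk score all check out.

```python
# Illustrative zero-trust access decision (hypothetical model, not a real API):
# deny by default; allow only when every check passes.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    device_compliant: bool      # e.g. patched OS, disk encryption
    app: str
    risk_score: float           # 0.0 (low) .. 1.0 (high), from telemetry

# Per-app policy: which users may reach which app, and the max tolerated risk.
APP_POLICIES = {
    "payroll": {"allowed_users": {"alice"}, "max_risk": 0.3},
    "wiki":    {"allowed_users": {"alice", "bob"}, "max_risk": 0.7},
}

def broker_connection(req: AccessRequest) -> bool:
    """Return True only if the request passes every check (default deny)."""
    policy = APP_POLICIES.get(req.app)
    if policy is None:
        return False                      # unknown app: deny
    if req.user not in policy["allowed_users"]:
        return False                      # identity check failed
    if not req.device_compliant:
        return False                      # device posture check failed
    return req.risk_score <= policy["max_risk"]

# A compliant, low-risk user reaches only the app policy permits:
print(broker_connection(AccessRequest("bob", True, "wiki", 0.2)))     # True
print(broker_connection(AccessRequest("bob", True, "payroll", 0.2)))  # False
```

Note the contrast with a perimeter model: nothing here places the user "on the network"; each application connection is evaluated independently.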


MegaBox x mofusand: 17-foot mofusand photo spots for Lunar New Year draw crowds

(Second from right in photo) MegaBox Senior Director 吳鎧廷 said that foot traffic and spending during the Lunar New Year period rose nearly 10% over ordinary days, roughly flat year on year against last year's holiday period.

Malls across Hong Kong put up themed installations to draw Lunar New Year crowds. MegaBox in Kowloon Bay, developed by Kerry Properties (00683), has teamed up this year with the popular Japanese "healing" cat character mofusand: from now until February 16, the 5/F atrium hosts the large-scale "MegaBox x mofusand Lucky Cat Hanami Station" installation. Hong Kong's first 17-foot giant "fortune coin" mofusand stands outdoors on MegaBox's 5/F, promising visitors an unprecedented spectacle.

New Year foot traffic and spending up nearly 10%, flat year on year

MegaBox Senior Director 吳鎧廷 said foot traffic and spending during the Lunar New Year period rose nearly 10% over ordinary days and were roughly flat against last year's holiday period. He noted that MegaBox is Hong Kong's largest home-furnishings hub, with home-goods tenants occupying 380,000 sq ft, and combines shopping, dining, and entertainment under one roof; he expects it to attract substantial foot traffic and furniture-buying demand as residents move into the Kai Tak runway district. He revealed that the mall has successfully expanded into the pet market: shoppers may bring their pets along, and close to 70% of tenants now accept customers with pets, adding to the mall's festive atmosphere. In addition, MegaBox is holding its first "Lunar New Year Flower Market" on 9/F, with pop-up stalls selling festive flowers so customers can buy New Year blooms, festive goods, and home decorations in one stop.

MegaBox has brought the mofusand cats to Hong Kong, setting up festive photo spots across the 5/F atrium, including three interactive wishing installations such as the "Lucky Cat Coin Wishing Pool" and "DIY Blessing Charm," plus the "Lucky Cat Train" running past a sea of flowers for a full New Year mood. Other photo spots include the 18-foot "Lucky Cat Hanami Station," the 17-foot "Giant Fortune Coin mofusand," and the "Cherry Blossom Bridge," with mofusand escorting everyone into a happy and prosperous Year of the Snake.


Best No ChexSystems Banks for Business Accounts

A negative ChexSystems report can make getting approved for a business account challenging, and it may take up to five years for the record to be removed. While you’re working to resolve issues on your ChexSystems report, you can apply for a business account with one of the providers on our list of the best business accounts that do not use ChexSystems.

- Best overall business account with no ChexSystems: Bluevine (a fintech platform, not a bank; FDIC insurance and deposit services are provided through a partnership with Coastal Community Bank)
- Best for full-service banking and branch accessibility: Wells Fargo (Member FDIC)
- Best for unlimited daily transactions: Capital One (Member FDIC)
- Best for robust startup services and high FDIC coverage: Mercury (a fintech company, not an FDIC-insured bank; banking services provided by Choice Financial Group and Evolve Bank & Trust®️, Members FDIC; deposit insurance covers the failure of an insured bank)
- Best for speedy fund access and express payments: Novo (a fintech platform, not a bank; FDIC insurance and deposit services are provided through a partnership with Middlesex Federal Savings)

Best business accounts with no ChexSystems: quick comparison

The table below shows the top factors I considered when evaluating the five best business accounts with no ChexSystems.

Bluevine: Best overall business account with no ChexSystems

Our rating: 4.48 out of 5

Bluevine is a highly reputable fintech company that offers competitive high yields through a fee-free business checking account. Unlike most checking products, which typically do not earn an APY, Bluevine’s account stands out as an excellent choice. Additionally, it provides one of the best lines of credit available, featuring low interest rates. Bluevine is also among our top picks for the best international banks for businesses, thanks to its quick overseas payment options and transparent pricing.
Small businesses can take advantage of its integrations with various financial platforms, including QuickBooks, Xero, CashApp, Venmo, and Square.

Why I chose it

Bluevine Standard is my best overall business account with no ChexSystems because it has no monthly maintenance fee, initial deposit, or minimum balance requirement. Additionally, users can earn an APY of 1.5% (on qualifying balances of up to $250,000) if they meet one of these two conditions:

- Make a monthly purchase of at least $500 using a Bluevine debit card.
- Receive a monthly payment of $2,500 into the checking account.

Account holders can also enjoy unlimited transactions, free standard ACH services, and no fees for incoming domestic wire transfers. As your business grows, I recommend shifting to Bluevine’s higher-tier plans, which offer even higher APY rates and reduced standard payment fees. Take advantage of Bluevine’s free one-month trial for an upgraded plan: you can earn higher APY rates without having to meet any conditions, enjoy discounted bill payments, and switch back to the free plan once the trial period ends.

Monthly fees

- Bluevine Standard: $0.
- Bluevine Plus: $30; waivable by having an average daily balance (ADB) of $20,000 across your Bluevine checking account (including subaccounts) or a spend of $2,000 monthly using your Bluevine debit or credit card.
- Bluevine Premier: $95; waivable by meeting an ADB of $100,000 across your Bluevine checking account (including subaccounts) or a spend of $5,000 monthly using your Bluevine debit or credit card.

Features

- Five subaccounts with unique account numbers.
- Free ACH and incoming domestic wire transfers.
- Free ATM access at over 37,000 MoneyPass locations.
- International payments to 32 countries in 15 currencies.
- QuickBooks, Xero, and Wave integrations.
- Wise, Venmo, CashApp, and Square compatibility.
- FDIC insurance of up to $3 million.
- Lines of credit of up to $250,000 at low rates.
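The APY qualification rule above reduces to a small calculation. This is a back-of-the-envelope sketch using the figures from the article, with a simple-interest monthly approximation rather than the provider's actual accrual method:

```python
# Bluevine Standard APY rule as described in the article (illustrative only).
QUALIFYING_APY = 0.015          # 1.5% APY
QUALIFYING_CAP = 250_000        # applies to qualifying balances up to $250,000

def qualifies(monthly_debit_spend: float, monthly_incoming: float) -> bool:
    """Either condition unlocks the APY: $500 debit spend OR $2,500 received."""
    return monthly_debit_spend >= 500 or monthly_incoming >= 2_500

def monthly_interest(balance: float, monthly_debit_spend: float,
                     monthly_incoming: float) -> float:
    """Approximate one month of interest (simple interest, no compounding)."""
    if not qualifies(monthly_debit_spend, monthly_incoming):
        return 0.0
    earning_balance = min(balance, QUALIFYING_CAP)  # cap at $250,000
    return earning_balance * QUALIFYING_APY / 12

# A $100,000 balance with $600 of card spend earns interest; without it, none.
print(round(monthly_interest(100_000, 600, 0), 2))   # 125.0
print(round(monthly_interest(100_000, 100, 0), 2))   # 0.0
```

Note how the $250,000 cap bites: a $400,000 balance earns the same monthly interest as a $250,000 one.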
Pros and cons

Pros:
- No monthly maintenance fees or balance minimums for Bluevine Standard.
- APY rate of 1.5% on a free plan.
- Fast account application approval.

Cons:
- $2.50 for non-network ATM use, plus third-party operator fees.
- No ATM fee refunds.
- No savings products.

Wells Fargo: Best for full-service banking and branch accessibility

Our rating: 4.37 out of 5

Wells Fargo is a well-recognized institution that offers full-service banking, including three business checking accounts with waivable fees. The lowest tier, Initiate Business Checking, requires an initial deposit of $25 and includes 100 free transactions along with up to $5,000 in free cash deposits. The monthly fee of $10 can be waived by maintaining a daily balance of $500 or an average ledger balance of $1,000. With a large branch network in Washington, D.C., and 36 states (Wells Fargo has no branches in Hawaii, Indiana, Kentucky, Louisiana, Maine, Massachusetts, Michigan, Missouri, New Hampshire, Ohio, Oklahoma, Rhode Island, Vermont, or West Virginia), you can easily access in-person banking services at Wells Fargo. The bank also offers analyzed accounts, IOLTA (Interest on Lawyers Trust Account) and RETA (Real Estate Trust Account) accounts, savings products, certificates of deposit (CDs), credit cards, SBA (Small Business Administration) loans, healthcare practice financing, and lines of credit.

Why I chose it

If you’re looking for a business bank account with no ChexSystems, Wells Fargo is a solid pick because it has many physical locations, which is convenient for small businesses that require in-person banking. It allows a high free cash deposit limit of $5,000 for cash-reliant small businesses, and you can waive the monthly fee by maintaining a minimum daily balance of $500. I appreciate the wide range of business products available at Wells Fargo, making it easy to apply for savings accounts, financing, and credit cards all in one reputable institution.
Additionally, the bank offers personal second-chance accounts to help you rebuild your credit.

Monthly fees

- Initiate Business Checking: $10; waivable by meeting a daily balance of $500 or an average ledger balance of $1,000.
- Navigate Business Checking: $25; waivable by meeting a daily balance of $10,000 or average combined business deposit balances of $15,000.
- Optimize Business Checking: $75; can be reduced or offset by an earnings allowance.

Features

- Waivable monthly fees.
- Free incoming wire transfers for higher-tier accounts.
- Free access to more than


The Cost of AI: Power Hunger — Why the Grid Can’t Support AI

Remember when plans to use geothermal energy from volcanoes to power bitcoin mining turned heads as examples of skyrocketing, tech-driven power consumption? If it possessed feelings, AI would probably say that was cute as it gazes hungrily at the power grid. InformationWeek’s “The Cost of AI” series previously explored how energy bills might rise with demand from artificial intelligence, but what happens if the grid cannot meet escalating needs? Would regions be forced to ration power with rolling blackouts? Will companies have to “wait their turn” for access to AI and the power needed to drive it? Will more sources of power come online fast enough to absorb demand? The answers might not be as simple as adding windmills, solar panels, and more nuclear reactors to the grid. Experts from KX, GlobalFoundries, and Infosys shared their perspectives on AI’s energy demands and the power grid’s struggle to accommodate this escalation. “I think the most interesting benchmark to talk about is the Stargate [project] that was just announced,” says Thomas Barber, vice president, communications infrastructure and data center at GlobalFoundries. The multiyear Stargate effort, announced in late January, is a $500 billion plan to build AI infrastructure for OpenAI with data centers in the United States. “You’re talking about building upwards of 50 to 100 gigawatts of new IT capacity every year for the next seven to eight years, and that’s really just one company.” That is in addition to Microsoft and Google developing their own data center buildouts, he says. “The scale of that, if you think about it, is the Hoover Dam generates two gigawatts per year. You need 50 new Hoover Dams per year to do it.” The Stargate site planned for Abilene, Texas, would include power from green energy sources, Barber says.
“It’s wind and solar power in West Texas that’s being used to supply power for that.” Business Insider reported that developers also “filed permits to operate natural gas turbines at Stargate’s site in Abilene.” Barber says that as power gets allocated to data centers, in a broad sense, some efforts to go green are being applied. “It depends on whether or not you consider nuclear green,” he says. “Nuclear is one option, which is not carbon-centric. There’s a lot of work going into colocated data centers in areas where solar is available, where wind is available.” Barber says very few exponentials, such as Moore’s Law on microchips, last, but AI is now on the “upslope of the performance curve of these models.” Even as AI gets tested against more difficult problems, these are still the early training days in the technology’s development. When AI moves from training into inference, where AI draws conclusions, Barber says demand could be significantly greater, maybe even 10 times so, than with training data. “Right now, the slope is driven by training,” he says. “As these models roll out, as people start adopting them, the demand for inference is going to pick up and the capacity is going to go into serving inference.”

A Nuclear Scale Matter

The world already sees very hungry AI models, says Neil Kanungo, vice president of product-led growth for KX, and that demand is expected to rise. According to research released in May by the Electric Power Research Institute (EPRI), data centers currently account for about 4% of electricity use in the United States, and EPRI projects that share could rise as high as 9.1% by 2030. While AI training drives high power consumption, Kanungo says the ubiquity of AI inference makes its draw on power significant as well.
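A quick calculation shows what the EPRI figures imply. Assuming the roughly 4% share is measured today and the 9.1% figure lands in 2030, six years out (the exact baseline year is an assumption), the share of US electricity going to data centers would have to compound at roughly 15% a year:

```python
# Implied compound annual growth in data centers' share of US electricity use,
# from the EPRI figures quoted above (~4% now, up to 9.1% by 2030).
share_now, share_2030, years = 0.04, 0.091, 6  # 6-year horizon is an assumption
cagr = (share_2030 / share_now) ** (1 / years) - 1
print(f"{cagr:.1%}")   # 14.7%
```

That rate would far outpace overall electricity demand growth, which is the crux of the grid-capacity concern.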
One way to improve efficiency, he says, would be to take the transmission side of power out of the equation by placing data centers closer to power plants. “You get huge efficiency gains by cutting inefficiency out, where you’re having over 30% losses traditionally in power generation,” Kanungo says. He is also a proponent of nuclear power, considering its energy load and land-usage impact. “The ability to put these data centers near nuclear power plants and what you’re transmitting out is not power,” he says. “You’re transmitting data out. You’re not having losses on data transmission.” Nuclear power development in the United States, he says, has seen some stalling due to negative perceptions of safety and potential environmental concerns. Rising energy demands might be a catalyst to revisit such conversations. “This might be the right time to switch those perceptions,” Kanungo says, “because you have tech giants that are willing to take the risks and handle the waste, and go through the red tape, and make this a profitable endeavor.” He believes these are still the very early stages of AI adoption, and as more agents are used with LLMs, with agents completing tasks such as shopping for users, filling out tabular data, or doing deep research, more computation will be needed. “We’re just at the tip of the iceberg of agents,” Kanungo says. “The use cases for these transformer-based LLMs are so great, I think the demand for them is going to continue to go up and therefore we should be investing power to ensure that you’re not jeopardizing residential power … you’re not having blackouts, you’re not stealing base load.”

Energy Hungry GPUs

There is an unprecedented load being put on the grid, according to Ashiss Kumar Dash, executive vice president and global head of services, utilities, resources, energy and sustainability at Infosys.
He says the power conundrum as it relates to AI is three-pronged. “The increase in demand for electricity, increase in demand for energy is unprecedented,” Dash says. “No other general-purpose technology has put this much demand in the past … they say a ChatGPT query consumes 10 times the energy that a Google search would.” (According to research


Will the end of Windows 10 accelerate CIO interest in AI PCs?

Mainelli acknowledges that most of the practical use cases on AI PCs will be many of the same things people use AI in the cloud for today: content creation, content editing, text summarization, language translation, automation of repetitive tasks, prototyping, personalization, predictive insights, and virtual assistants. But they will run locally on the device, making them faster, cheaper, more private, and more secure. Allocating some AI workloads to PCs offers CIOs other benefits, he says, noting that Microsoft will continue to make its Copilot+ applications available in the cloud. “The vision around AI PCs is that, over time, more of the models, starting with small language models, and then quantized large language models … more of those workloads will happen locally, faster, with lower latency, and you won’t need to be connected to the internet and it should be less expensive,” the IDC analyst adds. “You’ll pay a bit more for an AI PC but [the AI workload is] not on the cloud and then arguably there’s more profit and it’s more secure.”


Gartner: Samsung was the Top Semiconductor Vendor Globally in 2024

Samsung regained the top spot on the leaderboard of semiconductor companies by revenue, Gartner said on Feb. 3 in its yearly worldwide semiconductor vendor statistics report. Global semiconductor revenue hit $626 billion in 2024, up 18.1% from 2023. “The rising demand for AI and generative AI (GenAI) workloads led data centers to become the second-largest market for semiconductors in 2024, behind smartphones,” said George Brocklehurst, vice president analyst at Gartner, in a press release.

Samsung takes the top of the semiconductor leaderboard

Samsung Electronics sat at the top of Gartner’s semiconductor leaderboard with $66.5 billion in revenue, rounded. The top five semiconductor vendors were:

1. Samsung ($66.5 billion in revenue).
2. Intel ($49.2 billion in revenue).
3. NVIDIA ($46 billion in revenue).
4. SK hynix ($42.8 billion in revenue).
5. Qualcomm ($32.4 billion in revenue).

A rebound in memory device prices helped Samsung take the top spot from Intel. NVIDIA moved up two spots to number three due to its success in the AI market; Gartner said NVIDIA’s strong AI business is the rocket engine behind its climb up the list. Despite Intel’s AI PC business and the Core Ultra chipset, the company’s semiconductor revenue stayed essentially flat at 0.1% growth in 2024; Gartner said weakness in Intel’s AI accelerator and x86 businesses offset its successes.

Data centers, CPUs, and AI processors drive the increase

“Graphics processing units (GPUs) and AI processors used in data center applications (servers and accelerator cards) were the key drivers for the chip sector in 2024,” Brocklehurst said in the press release. Data center semiconductor revenue alone was $112 billion in 2024, up from $64.8 billion in 2023. Memory is also hot, with 71.8% revenue growth among memory products in 2024.
High-bandwidth memory (HBM) production had a large impact, responsible for a significant portion of revenue for DRAM vendors; DRAM revenue overall grew by 75.4% in 2024. Nonmemory revenue, which accounts for most (74.8%) semiconductor revenue, grew 6.9% in 2024.

Semiconductor revenue predicted for 2025

As of February 2025, Gartner expects the semiconductor sector to continue to see rising revenue, with $705 billion predicted for 2025. “Memory and AI semiconductors will drive near-term growth,” Brocklehurst said. Specifically, HBM revenue will increase and make up a larger share of total DRAM revenue.
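The quoted figures can be sanity-checked with a little arithmetic: 18.1% growth to $626 billion implies roughly $530 billion of 2023 revenue, and the $705 billion forecast implies growth of roughly 12.6% in 2025:

```python
# Sanity-check the Gartner figures quoted above.
rev_2024 = 626e9              # 2024 revenue, up 18.1% from 2023
growth_2023_to_2024 = 0.181
rev_2025_forecast = 705e9

rev_2023 = rev_2024 / (1 + growth_2023_to_2024)
implied_2025_growth = rev_2025_forecast / rev_2024 - 1

print(f"implied 2023 revenue: ${rev_2023 / 1e9:.0f}B")      # implied 2023 revenue: $530B
print(f"implied 2025 growth: {implied_2025_growth:.1%}")    # implied 2025 growth: 12.6%
```

So the forecast embeds a deceleration: still strong growth in 2025, but a slower pace than 2024's memory-driven rebound.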


Google drops AI weapons ban—what it means for the future of artificial intelligence

Google has removed its long-standing prohibition against using AI for weapons and surveillance systems, marking a significant shift in the company’s ethical stance on AI development that former employees and industry experts say could reshape how Silicon Valley approaches AI safety. The change, quietly implemented this week, eliminates key portions of Google’s AI Principles that explicitly banned the company from developing AI for weapons or surveillance. These principles, established in 2018, had served as an industry benchmark for responsible AI development. “The last bastion is gone,” wrote Tracy Pizzo Frey, who spent five years implementing Google’s original AI principles as senior director of outbound product management, engagements and responsible AI at Google Cloud, in a BlueSky post. “It’s no holds barred. Google really stood alone in this level of clarity about its commitments for what it would build.” The revised principles remove four specific prohibitions: technologies likely to cause overall harm; weapons applications; surveillance systems; and technologies that violate international law and human rights. Instead, Google now says it will “mitigate unintended or harmful outcomes” and align with “widely accepted principles of international law and human rights.”

Google loosens AI ethics: What this means for military and surveillance tech

This shift comes at a particularly sensitive moment, as AI capabilities advance rapidly and debates intensify about appropriate guardrails for the technology. The timing has raised questions about Google’s motivations, although the company maintains these changes have been long in development.
“We’re in a state where there’s not much trust in big tech, and every move that even appears to remove guardrails creates more distrust,” Pizzo Frey said in an interview with VentureBeat. She emphasized that clear ethical boundaries had been crucial for building trustworthy AI systems during her tenure at Google. The original principles emerged in 2018 amid employee protests over Project Maven, a Pentagon contract involving AI for drone footage analysis. While Google eventually declined to renew that contract, the new changes could signal openness to similar military partnerships. The revision maintains some elements of Google’s previous ethical framework, but shifts from prohibiting specific applications to emphasizing risk management. This approach aligns more closely with industry standards like the NIST AI Risk Management Framework, although critics argue it provides less concrete restrictions on potentially harmful applications. “Even if the rigor is not the same, ethical considerations are no less important to creating good AI,” Pizzo Frey noted, highlighting how ethical considerations improve AI products’ effectiveness and accessibility. From Project Maven to policy shift: The road to Google’s AI ethics overhaul Industry observers say this policy change could influence how other technology companies approach AI ethics. Google’s original principles had set a precedent for corporate self-regulation in AI development, with many enterprises looking to Google for guidance on responsible AI implementation. The modification reflects broader tensions in the tech industry between rapid innovation and ethical constraints. As competition in AI development intensifies, companies face pressure to balance responsible development with market demands. 
“I worry about how fast things are getting out there into the world, and if more and more guardrails are removed,” said Pizzo Frey, expressing concern about the competitive pressure to release AI products quickly without sufficient evaluation of potential consequences.

Big tech’s ethical dilemma: Will Google’s AI policy shift set a new industry standard?

The revision also raises questions about internal decision-making processes at Google and how employees might navigate ethical considerations without explicit prohibitions. During her time at Google, Pizzo Frey established review processes that brought together diverse perspectives to evaluate AI applications’ potential impacts. While Google maintains its commitment to responsible AI development, the removal of specific prohibitions marks a significant departure from its previous leadership role in establishing clear ethical boundaries for AI applications. As AI continues to advance, the industry is watching to see how this shift might influence the broader landscape of AI development and regulation.


The evolution of IT optimization is AIOps

It could be said that the IT optimization movement started with monitoring. The idea behind IT monitoring is to determine how IT infrastructure and its underlying components perform in real time in order to make data-driven decisions about resource provisioning and IT security, or to evaluate usage trends. But monitoring was just the beginning. Observability was the next phase. And while it’s been around for about 10 years, in the last few years it’s really started to gain traction, especially with business imperatives like moving to the cloud and supporting remote workers. Unfortunately, for many large enterprises, efforts to optimize IT systems using observability data have led to tool sprawl and more work for IT professionals to manage the tools relative to the value they bring to the business. It is not uncommon for large enterprises to have 15 to 20 observability tools. That produces too many signals for IT to sift through, which overwhelms IT teams. It’s difficult to make sense of all of these signals, especially when dealing with major incidents. For IT leaders, applying the appropriate technology to the task at hand becomes imperative when dealing with alert deluge, complexity, rapid changes, and fast-paced innovation.

AIOps can help you deliver IT optimization with the bonus of automation

The right artificial intelligence for IT operations (AIOps) solutions in your environment can help make sense of the mountain of data coming in from observability tools, and even automate issue remediation at the service level. This helps ensure all of your business services are optimized, automated, and delivering for your customers, employees, and other end users.

AIOps and observability data work better together

AIOps uses AI and machine learning (ML) to automate IT operations, from reconciling and analyzing data collected by various sources, including observability tools, to conducting root cause analysis and automated remediations.
AIOps is a prescriptive and proactive means of directing IT teams to the source of problems with high confidence and context, ultimately reducing or eliminating the time spent troubleshooting an issue. Good AIOps platforms can take in volumes of data natively or from integrations with other tools, reconcile and normalize that data, and provide a unified view (east-west) across IT domains, proactively pointing IT teams to the source of problems and often preventing an incident from becoming a larger issue that impacts the business. AIOps focuses on automatic problem resolution and on preventing emerging incidents from happening.

AIOps provides more insights and actions than observability alone

AIOps provides a real-time, action-oriented solution that drives business results. Good AIOps solutions go a step further than observability solutions by:

- Reconciling ingested data and providing a unified view (east-west) across disparate tools and domains. Conversely, observability tools have been used to explore data after a problem occurs and within a single observability domain (north-south), often isolated from other observability domains.
- Automating problem resolution and preventing incidents from happening, versus observability tools, which only enable data exploration.
- Reducing noise and performing root cause analysis, versus observability data, which is used for interactive exploration.
- Focusing on automation and intelligent remediation using AI/ML, versus observability, which focuses on data collection and investigation.
- Using predictive algorithms to optimize service assurance, versus observability, which is used for capacity planning in semi-automated ways.
- Providing best-action recommendations based on past and real-time ML-driven insights, versus observability, which provides explorative iteration.
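The reconciliation step described above can be sketched generically. This is a toy illustration, not BMC's implementation: alerts from several observability tools are correlated by the affected host, so IT sees one incident instead of many raw signals.

```python
# Toy sketch of alert reconciliation: correlate raw signals from multiple
# observability tools into deduplicated per-host incidents.
from collections import defaultdict

alerts = [  # hypothetical raw signals from three monitoring tools
    {"tool": "apm",   "host": "db-01",  "msg": "latency high"},
    {"tool": "infra", "host": "db-01",  "msg": "CPU 97%"},
    {"tool": "logs",  "host": "db-01",  "msg": "slow query flood"},
    {"tool": "infra", "host": "web-02", "msg": "disk 91%"},
]

# Group by affected host: one incident per host, however many tools fired.
incidents = defaultdict(list)
for alert in alerts:
    incidents[alert["host"]].append(f'{alert["tool"]}: {alert["msg"]}')

for host, signals in incidents.items():
    print(f"{host}: 1 incident from {len(signals)} signals -> {signals}")
```

Real platforms correlate on far richer keys (service topology, time windows, learned patterns), but the payoff is the same: four signals become two actionable incidents instead of four tickets.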
How AIOps delivers value for IT organizations

Enterprise IT organizations today are already seeing the gains of applying AIOps across their environments using BMC solutions. BMC's AIOps is powered by its composite AI, including causal, predictive, and generative AI (BMC HelixGPT) solutions, which automate traditional incident analysis and offer a clear, plain-language summary of the problem, as well as information about how the same problems were solved in the past. Using composite AI, an AIOps solution can detect an anomaly, generate a summary of the incident, and suggest a best action recommendation (BAR). Automated incident resolution, with AI and generative AI (genAI) functionality, prevents downtime and allows IT to perform health checks preemptively, improving overall system reliability and resilience.

AIOps can also accelerate troubleshooting workflows by providing predefined prompts that answer questions, leading to a better understanding of complex systems and, ultimately, faster resolution. Using a solution such as Ask BMC HelixGPT speeds up the process and results in quicker resolutions. GenAI functionality in AIOps solutions such as BMC Helix helps IT teams conduct changes confidently, mitigating the risk that a change will negatively impact the environment. Our AIOps approach, coupled with ServiceOps, enables flexible change-risk management and automated or hybrid change governance.

AIOps can also use its knowledge of historical usage patterns and business trends to accurately predict future resource demands. This helps prevent outages and optimizes operations by allowing enterprise IT to run what-if scenarios to right-size capacity for user demand. In this way, AIOps helps organizations proactively plan for capacity, ensuring both performance and cost efficiency.

Are you ready to achieve real business value with AIOps?
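The capacity-prediction idea above can be illustrated with the simplest possible forecaster, a least-squares trend line extrapolated forward. The usage figures are made up, and real AIOps forecasting uses far more sophisticated models (seasonality, business calendars, ML):

```python
def linear_forecast(samples, steps_ahead):
    """Fit a least-squares line y = a + b*x to historical usage
    samples (one per period) and extrapolate steps_ahead periods
    past the last observation."""
    n = len(samples)
    xs = range(n)
    x_mean = (n - 1) / 2
    y_mean = sum(samples) / n
    b = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, samples)) \
        / sum((x - x_mean) ** 2 for x in xs)
    a = y_mean - b * x_mean
    return a + b * (n - 1 + steps_ahead)

# Hypothetical monthly storage usage in TB, growing roughly linearly.
usage = [10, 12, 14, 16, 18, 20]
projected = linear_forecast(usage, steps_ahead=6)  # about 32 TB
```

A what-if scenario is then just a comparison of the projection against provisioned capacity, flagging when demand would cross the limit so capacity can be right-sized ahead of time.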
AIOps solutions can create a core competitive advantage for the entire organization, with BMC customers having achieved results like:

- 100% uptime for their business services
- 100% visibility across their IT environment
- More than 70% reduction in incident volume
- $1 million in infrastructure cost savings
- $2.3 million in savings from reduced tool sprawl
- Productivity savings from freeing up the time of as many as 96 full-time employees

Start driving business outcomes with AIOps. Click here to learn more about BMC AIOps solutions and how we can help you transform your IT landscape. To schedule a consultation with BMC to start transforming your IT organization, click here.

The evolution of IT optimization is AIOps