Intersec 2025: Middle East CISOs focus on AI

The growing threat posed by agentic AI and disinformation has become one of the most critical challenges in the cybersecurity landscape. At the inaugural CISO Business Briefing, held as part of Intersec 2025 at the Dubai World Trade Centre, cybersecurity experts and industry leaders gathered to explore the implications of these emerging risks and strategies to mitigate them. The event highlighted the urgent need for organizations to adapt their cybersecurity measures to evolving digital threats.

In his keynote address, H.E. Dr. Mohamed Al Kuwaiti, Head of Cybersecurity for the UAE government, set the stage for a series of thought-provoking discussions on the latest developments in cybersecurity. The event's opening session, led by Faheem Siddiqui, Director of Information Security at Majid Al Futtaim Holding, zeroed in on one of the most pressing issues: "Mitigating Risk from AI Agents, Disinformation Security."

Siddiqui shed light on the dangers of disinformation campaigns and the growing capabilities of AI-driven systems. These "agentic AI" systems are self-directed and capable of generating highly sophisticated disinformation, including deepfakes, fake news, and brand impersonations. Such threats, he warned, could have devastating consequences for businesses, governments, and societies alike.


10 top priorities for CIOs in 2025

9. Commit to innovation

In 2025, a CIO's top priority should be leveraging emerging technologies, particularly AI, to drive organizational agility, innovation, and resilience, advises Tim Barnett, CIO at payments and data security firm Bluefin. The pace of change in the global market and technology landscape demands organizations that can adapt quickly. "Agility and innovation are no longer competitive advantages — they're necessities," Barnett states.

AI, in particular, is poised to redefine how businesses operate, compete, and grow. "On the positive side, AI is what the military would call a force multiplier," Barnett says. "It amplifies capabilities across the board, enabling automation of complex processes, deeper insights from massive datasets, and real-time decision-making at a scale and speed previously unimaginable," he explains. "However, increased reliance on AI also necessitates modernized cybersecurity and resilience strategies, as these systems become vital — and attractive — targets for sophisticated threats, including AI-enabled phishing and other advanced attack vectors."


AI Risk Management: Is There an Easy Way?

When ChatGPT commercially launched in 2022, governments, industry sectors, regulators and consumer advocacy groups began to discuss the need to regulate AI as well as to use it, and it is likely that new regulatory requirements for AI will emerge in the coming months.

The quandary for CIOs is that no one really knows what these new requirements will be. However, two things are clear: it makes sense to do some of your own thinking about what your company's internal guardrails for AI should be, and there is too much at stake for organizations to ignore AI risk.

The annals of AI deployments are rife with examples of AI gone wrong, resulting in damage to corporate images and revenues. No CIO wants to be on the receiving end of such a gaffe. That's why PwC says, "Businesses should also ask specific questions about what data will be used to design a particular piece of technology, what data the tech will consume, how it will be maintained and what impact this technology will have on others … It is important to consider not just the users, but also anyone else who could potentially be impacted by the technology. Can we determine how individuals, communities and environments might be negatively affected? What metrics can be tracked?"

Identify a "Short List" of AI Risks

As AI grows and individuals and organizations of all stripes begin using it, new risks will develop, but these are the current AI risks that companies should consider as they embark on AI development and deployment:

Un-vetted data. Companies aren't likely to obtain all of the data for their AI projects from internal sources; they will need to source data from third parties. A molecular design research team in Europe used AI to scan and digest all of the worldwide information available from sources such as research papers, articles, and experiments on the molecule it was studying.
A healthcare institution wanted to use an AI system for cancer diagnosis, so it went out to procure data on a wide range of patients from many different countries. In both cases, the data needed to be vetted. In the first case, the research team narrowed the lens of the data it chose to admit into its molecular data repository, opting to use only information that directly referred to the molecule under study. In the second case, the healthcare institution made sure that any data it procured from third parties was properly anonymized so that the privacy of individual patients was protected. By properly vetting the internal and external data that AI would be using, both organizations significantly reduced the risk of admitting bad data into their AI data repositories.

Imperfect algorithms. Humans are imperfect, and so are the products they produce. The faulty Amazon recruitment tool, powered by AI and producing results that favored males over females in recruitment efforts, is an oft-cited example — but it's not the only one. Imperfect algorithms pose risks because they tend to produce imperfect results that can lead businesses down the wrong strategic paths. That's why it's imperative to have a diverse AI team working on algorithm and query development. This diversity should come from a broad set of business areas (along with IT and data scientists) working on the algorithmic premises that will drive the data, and the team should be equally diverse in age, gender and ethnic background. To the degree that a full range of diverse perspectives is incorporated into algorithmic development and data collection, organizations lower their risk, because fewer stones are left unturned.

Poor user and business process training. AI system users, as well as AI data and algorithms, should be vetted during AI development and deployment.
For example, a radiologist or a cancer specialist might have the chops to use an AI system designed specifically for cancer diagnosis, but a podiatrist might not. Equally important is ensuring that users of a new AI system understand where and how the system is to be used in their daily business processes. For instance, a loan underwriter in a bank might take a loan application, interview the applicant, and make an initial determination as to the kind of loan the applicant could qualify for; the next step might be to run the application through an AI-powered loan decisioning system to see if the system agrees. If there is disagreement, the application might then go to the lending manager for review. The keys here, from both the AI development and deployment perspectives, are that the AI system must be easy to use and that users know how and when to use it.

Accuracy over time. AI systems are initially developed and tested until they acquire a degree of accuracy that meets or exceeds that of subject matter experts (SMEs). The gold standard for AI system accuracy is a system that is 95% accurate when compared against the conclusions of SMEs. Over time, however, business conditions can change, or the machine learning that the system does on its own might begin to produce results that are less accurate relative to what is transpiring in the real world. Inaccuracy creates risk. The solution is to establish a metric for accuracy (e.g., 95%) and to measure it on a regular basis. As soon as AI results begin losing accuracy, data and algorithms should be reviewed, tuned and tested until accuracy is restored.

Intellectual property risk. Earlier, we discussed how AI users should be vetted for their skill levels and job needs before using an AI system.
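The accuracy-over-time practice described above (set a target such as 95%, measure it regularly, and retune when results drift) can be sketched in a few lines. This is a minimal illustration, not a production monitor; the function names, target value, and sample data are all hypothetical:

```python
# Hypothetical sketch: periodically compare recent AI conclusions
# against subject-matter-expert (SME) conclusions and flag the
# system for review when accuracy drops below the agreed target.

TARGET_ACCURACY = 0.95  # the "gold standard" threshold from the text

def accuracy(ai_results, sme_results):
    """Fraction of cases where the AI agrees with the SME conclusion."""
    if not ai_results or len(ai_results) != len(sme_results):
        raise ValueError("need equal-length, non-empty result lists")
    matches = sum(a == s for a, s in zip(ai_results, sme_results))
    return matches / len(ai_results)

def review_needed(ai_results, sme_results, target=TARGET_ACCURACY):
    """True when accuracy has drifted below target, meaning data and
    algorithms should be reviewed, tuned, and retested."""
    return accuracy(ai_results, sme_results) < target

# Example periodic check on a small sample of recent cases
ai = ["malignant", "benign", "benign", "malignant", "benign"]
sme = ["malignant", "benign", "malignant", "malignant", "benign"]
print(accuracy(ai, sme))       # 0.8
print(review_needed(ai, sme))  # True: below the 95% target
```

In practice the comparison sample would be drawn on a rolling window and tracked per case type, but the core loop is just this: measure against the SME baseline, then trigger a review when the metric falls below target.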
An additional level of vetting should be applied to those individuals who use the company’s AI to develop proprietary intellectual property for the company.   If you are an aerospace company, you don’t want your


AI comes alive: From bartenders to surgical aides to puppies, tomorrow’s robots are on their way

Humanoid robots are no longer the stuff of science fiction. Imagine a world where robots not only collaborate with us in factories but also greet us in stores, aid in surgeries and care for our loved ones. With Tesla planning to deploy thousands of Optimus robots by 2026, the age of humanoid robots is closer than we think.

This vision is becoming increasingly tangible as more companies showcase groundbreaking innovations. The 2025 Consumer Electronics Show (CES) featured several examples of how robotics is advancing in both functionality and human-centric design. These included ADAM, the robot bartender from Richtech Robotics, which mixes more than 50 types of drinks and interacts with customers, and Tombot Inc.'s puppy dogs, which wag their tails and make sounds designed to comfort older adults with dementia. While there may be a market for these and other robots on display at the show, it is still early days for broad deployment of this type of robotic technology.

Nevertheless, real technological progress is being made in the field. Increasingly, this includes "humanoid" robots that use generative AI to create more human-like abilities, enabling robots to learn, sense and act in complex environments. From Optimus by Tesla to Aria from Realbotix, the next decade will see a proliferation of humanoid robots.

A conversation with "Aria." Source: CNET https://youtu.be/2HQ84TVcbMw

Despite these promising advancements, some experts caution that achieving fully human-like capabilities is still a distant goal.
Citing shortcomings in current technology, Yann LeCun — one of the "Godfathers of AI" — argued recently that AI systems do not "have the capacity to plan, reason … or understand the physical world," and that we cannot build sufficiently capable robots today because "we can't get them to be smart enough." LeCun might be correct, although that doesn't mean we will not soon see more humanoid robots. Elon Musk recently said that Tesla will produce several thousand Optimus units in 2025 and that he expects to ship 50,000 to 100,000 of them in 2026. That is a dramatic increase from the handful that exist today performing circumscribed functions. Of course, Musk has been known to get his timelines wrong, such as when he said in 2016 that fully autonomous driving would be achieved within two years.

Nevertheless, it seems clear that significant advances are being made with humanoid robots. Tesla is not alone in pursuing this goal; Agility Robotics, Boston Dynamics and Figure AI are among the other leaders in the humanoid robotics field. Business Insider recently spoke with Agility Robotics CEO Peggy Johnson, who said it would soon be "very normal" for humanoid robots to become coworkers with humans across a variety of workplaces. Last month, Figure announced in a LinkedIn post: "We delivered F.02 humanoid robots to our commercial client, and they're currently hard at work." With significant backing from major investors including Microsoft and Nvidia, Figure will provide fierce competition in the humanoid robot market.

Figure 02 humanoid robots at work in a BMW factory. Source: YouTube: https://youtu.be/WlUFoZstcWg

Creating a world view

LeCun did have a point, however: more advances are required before robots have more complete human capabilities. It is simpler to move parts in a factory than to navigate dynamic, complex environments.
The current generation of robots faces three key challenges: processing visual information quickly enough to react in real time; understanding the subtle cues in human behavior; and adapting to unexpected changes in their environment. Most humanoid robots today depend on cloud computing, and the resulting network latency can make simple tasks like picking up an object difficult.

One company working to overcome current robotics limitations is startup World Labs, founded by "AI Godmother" Fei-Fei Li. Speaking with Wired, Li said: "The physical world for computers is seen through cameras, and the computer brain behind the cameras. Turning that vision into reasoning, generation and eventual interaction involves understanding the physical structure, the physical dynamics of the physical world. And that technology is called spatial intelligence."

Generative AI powers spatial intelligence by helping robots map their surroundings in real time, much as humans do, predicting how objects might move or change. Such advancements are crucial for creating autonomous humanoid robots capable of navigating complex, real-world scenarios with the adaptability and decision-making skills needed for success. While spatial intelligence relies on real-time data to build mental maps of the environment, another approach is to help the humanoid robot infer the real world from a single still image. As explained in a preprint paper, Generative World Explorer (GenEx) uses AI to create a detailed virtual world from a single image, mimicking how humans make inferences about their surroundings. While still in the research phase, this capability could help robots make split-second decisions or navigate new environments with limited sensor data, allowing them to quickly understand and adapt to spaces they have never experienced before.
The ChatGPT moment for robotics is coming

While World Labs and GenEx push the boundaries of AI reasoning, Nvidia's Cosmos and GR00T address the challenges of equipping humanoid robots with real-world adaptability and interactive capabilities. Cosmos is a family of AI "world foundation models" that help robots understand physics and spatial relationships, while GR00T (Generalist Robot 00 Technology) allows robots to learn by watching humans, much as an apprentice learns from a master. Together, these technologies help robots understand both what to do and how to do it naturally.

These innovations reflect a broader push in the robotics industry to equip humanoid robots with both cognitive and physical adaptability. GR00T could enable humanoid robots to help in healthcare by observing and mimicking medical professionals, while GenEx might allow robots to navigate disaster zones by inferring environments from limited visual input. As reported by Investor's Business Daily, Nvidia CEO Jensen Huang said: "The ChatGPT moment for robotics is coming."

Another company working to create physical AI


L3Harris CEO Urges Musk, Ramaswamy To Limit Bid Protests

By Dorothy Atkins (January 16, 2025, 9:54 PM EST) — L3Harris Technologies' CEO published an open letter Wednesday to leaders of the new U.S. Department of Government Efficiency — billionaire Elon Musk and ex-presidential candidate Vivek Ramaswamy — calling on them to overhaul the defense contracting process and limit bid protests to three per year, per contractor, among other changes….


Airport Authority introduces "SKYTOPIA", the new brand for its Airport City development blueprint, to the business community

The Airport Authority Hong Kong yesterday held a large-scale event to introduce "SKYTOPIA", the new brand for its Airport City development blueprint, to the business community. The event featured themed exhibition zones presenting a number of key projects and showcasing SKYTOPIA's development potential and investment opportunities, and attracted more than 300 business representatives from Hong Kong, the mainland and overseas.

SKYTOPIA will bring together commercial activity, pop culture, art trading and leisure entertainment. Its projects both leverage Hong Kong's unique advantages and make good use of the land and bay resources adjacent to Hong Kong International Airport. Highlights include:

• Hong Kong's first one-stop arts complex integrating artistic creation, appreciation and trading;
• the city's first storage facility dedicated to works of art;
• the city's largest marina at the airport bay, with more than 500 yacht berths;
• the city's largest waterfront recreation and leisure zone;
• an air-freighted fresh food market gathering fresh produce from around the world;
• phase two of AsiaWorld-Expo, including the city's largest indoor multi-purpose performance venue, with 20,000 seats;
• a sports-themed play zone combining indoor and outdoor concepts and blending adventure, exploration, sport and entertainment;
• a coastal resort and luxury hotels;
• a 1.5-kilometre waterfront promenade and open-air plaza;
• green and smart transport systems, including automated car parks and autonomous vehicles.

Airport Authority Chairman Fred Lam said at the event: "We envision Hong Kong International Airport as more than a gateway for passengers boarding flights or entering Hong Kong. Our vision is to build SKYTOPIA into a world-class landmark that attracts visitors from Hong Kong, the thriving Greater Bay Area, major Asian markets and around the world. The Airport Authority's role is to build the main infrastructure and provide a platform for experts and investors to deliver their services and products. We sincerely thank the HKSAR Government for its strong policy support and are pleased to see the positive response from the business community. We look forward to working with investors to make SKYTOPIA a key engine driving economic growth for Hong Kong and beyond."

The name "SKYTOPIA" embodies the development's key characteristics and vision: "SKY" and "IA" (for international airport) reflect the inseparable relationship between the sky and the airport, connecting to the world without limits, while "TOP" promises visitors a top-class experience, underscoring the development's status as a world-leading landmark and a sky city of travelers' dreams.


Lenovo to acquire Infinidat to expand its storage portfolio

Infinidat, currently led by CEO Phil Bullinger, was founded by Moshe Yanai in 2011. It also has an office in Waltham, Massachusetts.

Lenovo eyes high-end enterprise storage market

The acquisition is part of Lenovo's growth strategy to meet the evolving needs of modern data centers that are expected to handle AI and generative AI workloads, the company said, adding that Infinidat's offerings will find synergy with its Infrastructure Solutions Group and that together they will target the high-end enterprise storage market. Currently, Lenovo's Infrastructure Solutions business operates in the entry and mid-range enterprise storage market, offering a portfolio of options such as flash and hybrid arrays, hyperconverged infrastructure (HCI), software-defined storage (SDS), and data management offerings such as Lenovo TruScale.

"This is a win-win for both companies. Lenovo fills a big void in its storage portfolio, while Infinidat is able to leverage a hardware design and manufacturing machine," Matt Kimball, principal analyst at Moor Insights & Strategy, wrote on LinkedIn. Lenovo is expected to quickly train its sights on Infinidat's storage software IP and look at where it can leverage it more broadly, Kimball explained, adding that "if Lenovo's channels are properly leveraged, we can see real disruption in the enterprise storage market."

Early focus on the enterprise storage market

According to analysts, Lenovo has been hyper-focused on the enterprise storage market since it acquired IBM's x86 server business for about $2.3 billion in 2014. Another landmark deal, aimed at competing more aggressively with Dell and HPE, the dominant players in the enterprise storage market, came in 2018 in the form of a partnership with NetApp, under which the two companies also formed a joint venture in China to co-develop a new range of ThinkSystem infrastructure that incorporates NetApp's data management expertise.


2. Where men and women turn for emotional support and social connection

About three-quarters of U.S. adults (74%) say they would be extremely or very likely to turn to their spouse or partner if they needed emotional support. Men and women are equally likely to say they'd lean on their spouse or partner in this way. Mothers and friends are also frequent sources of support: 48% of adults point to their mother and 46% point to a friend as someone they'd be extremely or very likely to go to. Smaller shares would go to their father (28%) or to another family member (35%).

There are significant gender differences when it comes to certain sources of support. By margins ranging from 12 to 18 percentage points, greater shares of women than men say they'd be extremely or very likely to turn to:

• Their mother (54% of women vs. 42% of men)
• A friend (54% vs. 38%)
• Another family member who is not their parent, spouse or partner (44% vs. 26%)

Turning to mental health professionals and online communities

Americans are less likely to say they'd turn to a mental health professional for emotional support than to say they'd turn to family or friends. About one-in-five adults (19%) say they'd be extremely or very likely to turn to a mental health professional for this type of support. Some demographic groups are more likely than others to say they'd be extremely or very likely to seek out a mental health professional:

• Women are more likely to say this than men (22% vs. 16%).
• Black (26%) and Hispanic (25%) adults are more likely to say this than White (16%) and Asian (17%) adults.
• Adults younger than 50 are more likely to say this than those ages 50 and older (24% vs. 14%).

When it comes to seeking emotional support from online platforms or communities, relatively small shares of adults say they would be extremely or very likely to do this (5% overall).

Getting emotional support today versus 20 years ago

We also asked Americans how they think men and women are doing compared with 20 years ago in terms of having someone to turn to for emotional support.
On balance, the public thinks men and women are doing better in this area than they were two decades ago. Some 47% of adults say men are doing a lot or somewhat better, 20% say they're doing a lot or somewhat worse, and 32% say they're doing neither better nor worse. There's a similar pattern for women: 51% say they're doing better, 14% say they're doing worse, and 34% say neither better nor worse. Women are more likely than men to say that men are doing better these days when it comes to having someone to turn to for emotional support (51% vs. 42%). Similar shares of men (50%) and women (52%) say women are doing better compared with 20 years ago.

Communicating with friends

We also asked Americans about their friends and how they stay in touch with them. About eight-in-ten adults (81%) say they have at least one close friend – not counting their family members – and most (64%) have more than one close friend. About one-in-five (18%) say they don't have any close friends. Among adults who have close friends, 74% say they connect with one at least a few times a week, whether by texting, interacting on social media, talking on the phone or video chatting, or seeing them in person. Texting is the most common form of communication between friends: most adults with close friends (61%) say they text one either a few times a week or daily. Sizable shares also interact with friends on social media (39%) or talk to them via phone or video chat (35%). About three-in-ten (29%) say they see a close friend in person at least a few times a week.

Differences by gender and age

There are large differences in how often men and women text or interact on social media with close friends. Women are more likely than men to say they communicate frequently in these ways, by margins of 10 points or more. Women are also somewhat more likely than men to talk on the phone or video chat with a close friend at least a few times a week (38% vs. 32%).
But men (31%) are somewhat more likely than women (28%) to say they frequently see friends in person. This gender gap is fairly consistent across adults ages 30 to 49, 50 to 64, and 65 and older. However, among those younger than 30, men and women are about equally likely to communicate with close friends in these ways. Looking just at age, adults younger than 30 are the most likely to say they text (72%) or interact on social media (60%) with a close friend at least a few times a week. Those ages 65 and older are the least likely to say they regularly use these forms of communication.


EU joins industry backlash against Biden’s AI Chip export restrictions

What the framework says

The new framework creates a three-tier licensing hierarchy. Favored countries, such as the 10 EU nations, will be able to purchase AI chips, including the most powerful, without restriction. Most countries fall into the middle tier, subject to export licensing restrictions on how much computing power they can get hold of. And then there are countries that already cannot buy AI chips from the US, including obvious candidates such as China and Russia.

For countries in the middle tier, if an individual order doesn't exceed a collective computation power of roughly 1,700 advanced GPUs (the sort of GPU power used by a university or medical institute), no export license will be required, and these sales won't count against national chip quotas.

As for LLMs, sales of the most powerful proprietary models will also be restricted outside of the favored countries. The US Department of Commerce's Bureau of Industry and Security (BIS), which drafted the framework, defines the restricted models as those built using closed (as opposed to open-source) weights and more than 10^26 computational operations.
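The tiering logic described above can be sketched as a simple decision function. This is an illustration of the article's description only, not the official BIS rule text; the country groupings and the treatment of the roughly-1,700-GPU small-order exemption are assumptions drawn from the article:

```python
# Illustrative sketch of the three-tier licensing hierarchy as the
# article describes it. Country lists are hypothetical examples;
# consult the actual BIS framework for real classifications.

TIER1_UNRESTRICTED = {"Germany", "France", "Netherlands"}  # favored, e.g. EU nations
TIER3_EMBARGOED = {"China", "Russia"}  # already barred from buying US AI chips
SMALL_ORDER_GPUS = 1_700  # rough per-order exemption threshold from the article

def export_decision(country: str, order_gpus: int) -> str:
    """Classify a single AI-chip order under the described framework."""
    if country in TIER3_EMBARGOED:
        return "denied"
    if country in TIER1_UNRESTRICTED:
        return "no license required"
    # Middle tier: small orders skip licensing and national quotas
    if order_gpus <= SMALL_ORDER_GPUS:
        return "no license required (exempt from quota)"
    return "export license required (counts against quota)"

print(export_decision("France", 50_000))  # no license required
print(export_decision("India", 1_200))    # no license required (exempt from quota)
print(export_decision("India", 25_000))   # export license required (counts against quota)
print(export_decision("Russia", 100))     # denied
```

The point of the sketch is just to show how the three tiers and the small-order carve-out interact: destination first, then order size.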
