InformationWeek

How to Become a Collaborative IT Team Leader

Collaborative leadership is an increasingly popular approach to staff management that encourages teamwork, cooperation, and shared responsibility. At its heart, it’s about getting all team members working together toward shared goals.

A collaborative leader strives to get the best out of people from across the organization, without a personal agenda other than to make things better for the organization, says Rebecca Fox, group CIO with cybersecurity services provider NCC Group. “Collaborative leaders see the good intent in people and their talents but are also not afraid to challenge openly poor behaviors and practices and, of course, praise the best in people for their effort and successes,” she observes in an email interview.

Collaborative IT leaders actively foster an environment of teamwork and open communication, states Matt Robinson, team lead and senior UX design manager with Google Photos. “They prioritize collaboration over hierarchy, ensuring team members feel trusted, valued, and empowered to contribute their ideas and expertise,” he says in an online interview.

Robinson notes that a collaborative leader will understand the importance of cross-functional teamwork within IT and across related business areas. “They’re skilled at breaking down silos, facilitating knowledge sharing, and aligning team efforts with the broader business goals,” he says. “By leveraging the collective strengths of their team, collaborative IT leaders can drive innovation, enhance problem-solving, and ensure that projects are delivered efficiently and effectively.”

Getting Started

Communication is a collaborative environment’s starting point. “Becoming a role model and leading by example encourages the right behaviors,” Fox says. “You can’t expect others to collaborate unless you’re showing these behaviors too.” Motion shouldn’t be confused with action. “Make sure there’s action after you collaborate — and that action shouldn’t be just more collaboration.”

Becoming a collaborative IT team leader is really a matter of personal growth and fostering a more productive and innovative work environment, Robinson says. “It starts with developing strong interpersonal skills and a mindset that values teamwork over individual achievement.”

An important first step, Robinson says, is to listen actively to team members and understand their strengths, challenges, and ideas. “This involves creating opportunities for open dialogue through regular team meetings, one-on-one check-ins, or collaborative tools that encourage communication.”

Robinson believes that to be fully effective, team leaders must make a strong commitment to continuous learning and development. “Stay informed about the latest collaborative tools and techniques that can help streamline communication and project management,” he advises. “Investing in your growth and your team can build a more cohesive and collaborative work environment.”

Essential Attributes

IT leadership is not about technology, Fox says. “It’s about people, business, and organizational outcomes.” A successful IT team leader excels in communication and empathy, fostering collaboration, and empowering their team.
“They bridge the gap between technical requirements and business goals, ensuring IT initiatives align with the company’s strategic vision.”

Focusing on personal and team credibility sets the stage for growth and long-term effectiveness, says Randy Gross, chief information security officer at training and certification firm CompTIA, via email. “Transparency is the quickest way to demonstrate that credibility and accountability,” he states. Communicating technical concepts in business language lowers the chance of miscommunication. “Developing empathy for IT and business colleagues allows a meaningful and personal touch in crafting any solution.”

One of the most important attributes of a successful IT leader is having a strong strategic vision, observes Robin Hamerlinck, CIO at audio technology manufacturer Shure. “Working within a collaborative mindset and breaking silos between businesses to ensure collaboration is critical, but as a leader it’s your responsibility to encourage conversations and remind employees why they matter,” she says in an online interview. “Before you know it, teams will see the value of a collaborative approach and take it upon themselves to establish cross-functional connections.”

Hamerlinck feels that it’s also important to embrace innovation and creativity through collaboration. “At Shure, we’re constantly looking for ways to innovate our products and development processes, while holding onto important components of our technological legacy,” she says. “When I think about innovation and creativity, I think about how we can be even more forward-looking in our technological advancements, create solutions for customers, and prepare my teams for future shifts in the tech landscape.”

Parting Thoughts

Fox acknowledges that getting team members to collaborate can be difficult. “Prepare in advance for what you want,” she recommends. Ensure that all parties are engaged. “If you’re leading the collaboration sessions, prepare yourself mentally for the challenges ahead, and keep in mind that things rarely go according to plan.”

Collaboration is always a team sport, Gross observes. “It’s a choreographed and perfectly executed relay that relies on strong individual performances that together go faster than any individual ever could.”

Building an IT team that’s focused on collaboration is a process, and it can take time to get everyone in your organization onboard with the approach, Hamerlinck advises. “I like to remind IT leaders who are taking a collaborative approach to be patient and to work with their teams to understand the greater vision for your organization.”


Overconfidence in Cybersecurity: A Hidden Risk

Overconfidence in cybersecurity is a serious and often overlooked risk. Too many companies believe that investing in the latest tools and hiring top talent guarantees safety. But it doesn’t. Without constantly adapting your strategy, even the best technology won’t protect you.

The greatest danger might not come from hackers, but from your own false sense of security.

It’s easy to think that spending millions on sophisticated tools will keep threats at bay. But the more rigid your approach, the more exposed you become. Cyber threats evolve constantly — if you don’t keep up, you’re inviting risk.

Confidence Paradox: More Tools, More Blind Spots

I’ve seen this again and again: it’s what I call the “confidence paradox.” The more tools you add, the more confident you feel. But that confidence can quickly turn into dangerous blind spots.

In one of my engagements with a retail company, the company’s cybersecurity infrastructure had grown significantly over time. They had all the bells and whistles: intrusion detection, endpoint protection. You name it, they had it. The problem was that the IT team was overwhelmed by alerts. Every day, they received so many notifications that they missed the critical ones, resulting in a breach.

This isn’t just a one-off situation. According to BlueKupros, companies with fragmented security solutions are 3.5 times more likely to experience significant security incidents. The more complex the system, the harder it is to manage, and the more likely you are to overlook crucial details.

Case Study: Uber’s Alert Fatigue

Remember Uber’s 2022 data breach? It shows how alert fatigue and complexity can lead to serious security failures. In this case, the attacker used multi-factor authentication (MFA) fatigue, bombarding an Uber employee with repeated MFA requests until the employee eventually accepted one, allowing unauthorized access. Once inside, the hacker escalated privileges and moved laterally through Uber’s systems, accessing sensitive tools like its bug bounty program and Slack.

This breach shows how, even with extensive security tools, teams remain vulnerable when they are overwhelmed by alerts and unable to prioritize critical threats. Uber’s case illustrates the risk of depending too heavily on complex systems without ensuring that the human elements — like alert management and training — are equally robust. I’ve seen this same pattern with other clients. The issue isn’t a lack of tools; it’s that their teams can’t handle the noise. When teams are focused on small fires, they tend to miss the bigger, more critical threats.

Practical Advice: Streamline, Prioritize, and Audit

So how do you avoid falling into this trap? The answer isn’t more technology: it’s smarter management of the technology you already have. Here’s how:

- Consolidate your tools: Take a close look at the tools you’re using. Do they overlap? Are they really adding value? Often, less is more. Streamline your tools to reduce clutter and help your team focus on what matters.
- Prioritize alerts: Stop trying to manage everything. Use systems that prioritize alerts by severity. You’ll free up your team to focus on the threats that matter, instead of drowning in low-level noise. (A minimal triage sketch appears at the end of this article.)
- Regularly audit your security: Cybersecurity is never a “set it and forget it” task; it requires continuous monitoring and improvement. You need to audit both your tools and your processes regularly. Are they still effective? Are they aligned with the latest threats? And don’t forget to evaluate the human side of things. How is your team handling its workload?
- Focus on training: Your people are just as important as your tech. Continuous training ensures that your team is prepared for evolving threats and can better manage its tools. A well-trained team won’t fall into the trap of alert fatigue.

Why This Matters Now

As threats grow more sophisticated, companies are doubling down on technology to defend themselves. But the more you rely on tools without oversight, the more exposed you become. Don’t assume you’re safe just because you’ve invested heavily in security.

By streamlining, auditing, and focusing on the human element, you can avoid the pitfalls of overconfidence. In cybersecurity, confidence should come from having the right processes and people — not just the latest tools.

By following these steps and learning from cases like Uber’s, you’ll strengthen your defenses and avoid the dangers of overconfidence. It’s not about having more tech — it’s about using it effectively.
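To make the “prioritize alerts” recommendation concrete, here is a minimal triage sketch in Python. It is an illustration built on assumptions: the severity labels, alert fields, and queue cap are invented for the example and do not come from any particular SIEM’s schema.

```python
# A minimal sketch of severity-based alert triage (illustrative only).
# The severity ranking and alert dict shape are assumptions for the demo.

SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def triage(alerts, max_queue=20):
    """Return the most urgent alerts first, capped so analysts see a
    manageable queue instead of drowning in low-level noise."""
    ranked = sorted(
        alerts,
        key=lambda a: SEVERITY_RANK.get(a.get("severity", "low"), 99),
    )
    return ranked[:max_queue]

if __name__ == "__main__":
    alerts = [
        {"severity": "low", "source": "endpoint", "message": "New USB device"},
        {"severity": "critical", "source": "idp", "message": "Repeated MFA pushes for one user"},
        {"severity": "medium", "source": "ids", "message": "Port scan from internal host"},
    ]
    for alert in triage(alerts):
        print(alert["severity"], "-", alert["message"])
```

The design point is the cap: a bounded, severity-ordered queue is what keeps a critical signal, such as a burst of MFA pushes, from being buried under routine notifications.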


Elon Musk Sues OpenAI, Claiming Breach of Contract

Tech mogul Elon Musk on Thursday filed a lawsuit against OpenAI and its CEO Sam Altman, claiming the maker of generative AI juggernaut ChatGPT violated its founding mission to develop artificial intelligence safely and in an open-source environment.

Musk, who cofounded OpenAI in 2015 along with Altman and Greg Brockman, left the company in 2018 after saying the technology was “potentially more dangerous than nukes.” After Musk’s departure, OpenAI restructured and formed a for-profit arm, gaining major backing from Microsoft, which pledged more than $10 billion to bolster GenAI efforts.

The lawsuit also takes aim at the company’s efforts with artificial general intelligence (AGI), which would advance AI research to create human-like intelligence and the ability to self-teach. “To this day, OpenAI, Inc.’s website continues to profess that its charter is to ensure that AGI benefits all of humanity. In reality, however, OpenAI, Inc. has been transformed into a closed-source de facto subsidiary of the largest technology company in the world: Microsoft,” according to the lawsuit.

OpenAI’s Altman was briefly fired by its board of directors in November. Just days later, after Microsoft intervened, Altman was reinstated and new board members were named. Microsoft scored a non-voting, “advisory” seat on the board.

“Under its new Board, [OpenAI] is not just developing but is actually refining an AGI to maximize profits for Microsoft, rather than for the benefit of humanity,” the lawsuit stated.

Musk’s lawsuit marks the latest in a string of legal woes for OpenAI, including other lawsuits concerning the company’s use of copyrighted material and an ongoing investigation by the Federal Trade Commission focused on its investments and partnerships.

Last year, Musk signed an open letter alongside many technology luminaries calling for a pause on GenAI research. That didn’t stop Musk from launching his own GenAI service, Grok, a large language model trained on posts from Musk’s X platform (formerly Twitter).

Manoj Saxena, founder and chairman of the Responsible AI Institute (RAI Institute), tells InformationWeek in an interview that Musk’s lawsuit could force a very important conversation. “I do believe that deep inside, Elon has a real point of view and concern,” Saxena says. “I saw it both as a brilliant and a reckless move to launch ChatGPT. There’s no doubt it was the ‘iPhone moment’ of the AI industry. But there are still a lot of parts that need to be put in place and, as messy as it is, this is a conversation that needs to be had.”

Saxena likens AI safety to the car industry’s history of creating safeguards. “But in this case, we don’t have 50 years,” he warns.

InformationWeek has reached out to OpenAI, Microsoft, and Musk’s attorneys and will update with any response.


How to Detect Deepfakes

Deepfakes are a clear and present danger to businesses. According to Markets and Markets, the deepfake market will balloon from $564 million in 2024 to $5.1 billion by 2030, a 44.5% compound annual growth rate.

Deepfakes pose several types of threats, including corporate sabotage, enhanced social engineering attacks, identity spoofing, and more. Most commonly, bad actors use deepfakes to increase the effectiveness of social engineering.

“It’s no secret that deepfakes are a significant concern for businesses and individuals. With the advancement of AI-generated fakes, organizations must spot basic manipulations and stay ahead of techniques that convincingly mimic facial movements and voice patterns,” says Chris Borkenhagen, chief digital officer and chief information security officer at identity verification and fraud prevention solutions provider AuthenticID, in an email interview. “Detecting deepfakes requires advanced machine learning models, behavioral analysis, and forensic tools to identify subtle inconsistencies in images, videos, and audio. Mismatches in lighting, shadows, and eye movements can often expose even the most convincing deepfakes.”

Organizations should leverage visual and text fraud algorithms that use deep learning to detect anomalies in the data underpinning deepfakes. This approach should go beyond surface-level detection to analyze content structure for signs of manipulation.

“The responsibility for detecting and mitigating deepfake threats should be shared across the organization, with CISOs leading the way. They must equip their teams with the right tools and training to recognize deepfake threats,” Borkenhagen says. “However, CEO and board-level involvement is important, as deepfakes pose risks that extend beyond fraud. They can damage a brand’s reputation and compromise sensitive communications. Organizations must incorporate deepfake detection into their broader fraud prevention strategies and stay informed about the latest advancements in AI technologies and detection tools.”

As deepfakes become more sophisticated, organizations must be prepared with both advanced detection tools and comprehensive response strategies.

“AI-powered solutions like Reality Defender and Sensity AI play a key role in detecting manipulated content by identifying subtle inconsistencies in visuals and audio,” says Ryan Waite, adjunct professor at Brigham Young University-Hawaii and VP of public affairs at digital advocacy firm Think Big. “Tools like FakeCatcher go further, analyzing physiological markers such as blood flow in the face to identify deepfakes. Amber Authenticate adds another layer of security by verifying the authenticity of media files through cryptographic techniques.” (A minimal sketch of this hash-and-verify idea follows the list below.)

Deepfake detection should be a priority, with CISOs, data science teams, and legal departments working together to manage these technologies. In addition to detection, companies must implement a deepfake response strategy, he says. This involves:

- Having clear protocols for identifying, managing, and mitigating deepfakes.
- Training employees to recognize manipulated content.
- Making sure the C-suite understands the risks of impersonation, fraud, and reputational damage, and plans accordingly.
- Staying informed on evolving AI and deepfake legislation.
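To illustrate the hash-based verification idea behind tools like Amber Authenticate, here is a short, hedged sketch in Python. The keyed-MAC scheme and manifest handling are assumptions made for the example, not that product’s actual protocol.

```python
# A minimal sketch of cryptographic media verification. Key handling and
# the sign/verify flow are illustrative assumptions, not a real product's
# protocol.
import hashlib
import hmac

def fingerprint(path: str) -> str:
    """SHA-256 digest of a media file, computed in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def sign(path: str, key: bytes) -> str:
    """Record a keyed MAC of the file's digest at capture time."""
    return hmac.new(key, fingerprint(path).encode(), hashlib.sha256).hexdigest()

def verify(path: str, key: bytes, recorded_mac: str) -> bool:
    """True only if the file is bit-for-bit identical to what was signed;
    any post-capture edit changes the hash and fails the check."""
    return hmac.compare_digest(sign(path, key), recorded_mac)
```

Note the limitation this implies: the scheme proves the integrity of a known original, so a deepfake substitution fails verification, but it cannot judge the authenticity of footage that was never signed.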
As regulatory frameworks develop, companies must be proactive in ensuring compliance and safeguarding their reputation.

“Combining cutting-edge tools, a robust response strategy, and legislative awareness is the best defense against this growing threat,” says Waite.

How Deepfakes Facilitate Social Engineering

Deepfakes are being used in elaborate scams against businesses by threat actors leveraging synthetic videos, audio, and images to enhance their social engineering attacks, such as business email compromise (BEC) and phishing. The use of AI has also made it incredibly easy to produce a deepfake and spread it far and wide. Moreover, there is a wealth of readily available tools on the dark web.

“We have seen evidence of deepfake videos being used in virtual meetings and audio in voicemail or live conversations, deceiving targets into revealing sensitive information or clicking malicious links,” says Azeem Aleem, managing director of client leadership for EMEA and managing director of Northern Europe. “Financial services firms are especially worried about the use of AI or generative-AI fraud, with Deloitte Insights showing a 700% rise in deepfake incidents in fintech in 2023.”

Other examples of deepfake techniques include “vishing” (voice phishing), Zoom bombing, and biometric attacks.

“Hackers are now combining email and vishing with deepfake voice technology, enabling them to clone voices from just three seconds of recorded speech and conduct highly targeted social engineering fraud,” says Aleem. “This evolution makes it possible for attackers to impersonate C-level executives using their cloned voices, significantly enhancing their ability to breach corporate networks.”

Zoom bombing occurs when uninvited guests disrupt online meetings or when attackers impersonate trusted individuals to infiltrate them. There are also biometric attacks.

“Businesses frequently use biometric authentication systems, such as facial or voice recognition, for employee verification,” says Aleem. “However, deepfake technology has advanced to the point where it can deceive these systems to bypass customer verification processes, including commands like blinking or looking in specific directions.”

According to accounts payable automation solution provider Medius, 53% of businesses in the US and UK have been targets of a financial scam powered by deepfake technology, with 43% falling victim to such attacks.

“Beyond BEC, attackers use deepfakes to create convincing fake social media profiles and impersonate individuals in real-time conversations, making it easier to manipulate victims into compromising their security,” says Aleem. “It’s not necessarily targeted, but it does prey on natural vulnerabilities like human error and fear. As AI applications develop, deepfakes can be produced to also request profile changes with agents and train voice bots to mimic IVRs. These deepfake voice techniques allow attackers to navigate IVR systems and steal basic account details, increasing the risk to organizations and their customers.”

The business risk is potential fraud, extortion, and market manipulation.

“Deepfakes are disrupting various industries in profound ways. Call centers at banks and financial institutions are grappling with deepfake voice cloning attacks aimed at unauthorized account access and fraudulent transactions,” says Aleem. “In the insurance sector, deepfakes are exploited to submit false evidence for fraudulent claims, causing significant financial losses. Media companies suffer reputational damage.”


2024 Cyber Resilience Strategy Report: CISOs Battle Attacks, Disasters, AI … and Dust

An InformationWeek Report | Sponsored by Palo Alto Networks

There is little to no consensus when it comes to cyber resilience, not on how to do it and not on how to define it. Errors/misconfigurations and equipment degradation caused as many significant disruptions as cyberattacks and third-party cyber incidents, and natural disasters are the top cause of significant issues. InformationWeek embarked on this research to try to decode current cyber resilience trends. Our survey allowed us to gain insights into what today’s cybersecurity professionals think about cyber resilience. Here are some key findings:

- Companies are defining “cyber resilience” in a wide variety of ways. Half (48%) of respondents include “maintaining trust with stakeholders” as part of their definition.
- Despite the need to redistribute IT budget funds to cover unexpected new technology costs like GenAI, about one-quarter (24%) devote 25% or more of their IT budget to cybersecurity.
- One-quarter of respondents (24%) said they do not have a cyber incident response plan at all.
- Errors/misconfigurations (18%) and equipment degradation (15%) caused as many significant disruptions as cyberattacks (15%) and third-party cyber incidents (15%).

Download this InformationWeek report to learn more about risk and response initiatives, cyber liability insurance, the effects of GenAI, and much more.


Squeezing the Maximum Value Out of Generative AI

In many different areas, talent is an important yet often elusive goal. Just ask anyone whose piano keyboard skills have never moved beyond pecking out the first few measures of “Heart and Soul.” When it comes to generative AI, large language models (LLMs), trained on massive quantities of data, supply the capabilities needed to drive multiple use cases and applications, as well as handle an almost endless array of tasks.

To get the most out of generative AI, think of it as a tool rather than a replacement, suggests Daniel Wu, an AI research fellow at Stanford University, in an email interview. He notes that LLMs can already do great work. “They’re being used in coding assistance and customer service, but they work best with clear prompting.”

Every organization produces large amounts of text as part of its normal business operations, observes Manfred Kügel, data scientist and IoT industry advisor for AI and analytics provider SAS, via email. Before LLMs, organizations needed to perform complex text analytics to get value out of unstructured text data, such as maintenance records or shift logs in a production environment. “LLMs can be used to structure text data and prepare it as inputs for machine learning models used for production optimization and predictive maintenance.” (A brief sketch of this idea appears at the end of this article.)

Pushing It to the Max

To gain maximum value from generative AI, users need to clearly define their problems and objectives, says Kevin Ameche, president of ERP software provider RealSteel, in an email interview. “Identify specific use cases, such as content generation, data analysis, or automation,” he advises. “Then, ensure you can access high-quality data for training the AI model.”

Ameche recommends collaborating with internal or external AI experts to fine-tune and customize the model to align with specific needs. “Continuously evaluate and refine the model’s performance and stay updated with the latest advancements in generative AI technology to maximize its potential for your organization.”

To maximize generative AI’s value, users should first understand its inherent capabilities and shortcomings, Kügel says. “We are still in the early days of realizing the full potential of generative AI,” he states.

Kügel believes that everyone involved in core business processes should interact with models in the same way they interact with their colleagues. “This will drive quick adoption and encourage organizations to provide the necessary and user-friendly generative AI tools to overcome any structural or cultural hurdles.”

Achieving Effectiveness

Generative AI’s effectiveness lies in its ability to automate creative processes, generate content, and provide data-driven insights at scale, Ameche explains. “It can handle repetitive tasks, freeing up human resources for more strategic work.” Meanwhile, the technology’s adaptability and capacity to learn from data make it a valuable tool in various industries.

An AI agent can’t read minds. “If you ask a poorly defined question, you’ll get one of any number of valid responses,” Wu says. “But by giving AI a stronger sense of what you’re searching for, either through clear prompts, data, or even model fine-tuning, you’ll get more useful responses.”

To empower team members, organizations should invest in generative AI training and development programs, Ameche says. “Start by identifying the specific skills and knowledge needed for working with AI,” he recommends. “Consider partnering with AI vendors or educational institutions for tailored training.”

Ameche believes that it’s also important to encourage employees to experiment with AI tools in real-world projects to gain hands-on experience. “Create an environment of continuous learning and provide access to resources, such as online courses, webinars, and AI communities,” he suggests. “Collaboration and knowledge sharing within the team can also accelerate the learning process, helping team members harness the maximum value from generative AI.”

Common Mistakes

Wu notes there’s a common saying in AI research: junk in, junk out. “Users may inadvertently harm their projects by providing biased datasets or creating poor prompts,” he explains. “Model outputs should always be taken with a hint of salt,” Wu recommends.

Both over- and underestimating generative AI’s potential is a serious concern, Kügel says. “So is seeing AI as a threat when an AI model produces insights that we didn’t see ourselves.”

As with any breakthrough technology, Kügel sees skepticism among many IT leaders. He highlights that it’s important to clearly show that generative AI augments and supports, rather than replaces, human experts. He recommends taking a balanced approach to AI adoption by deploying guardrails and plausibility checks. “The model should report on its own when it drifts too far from reality,” Kügel says.

Final Thought

Generative AI holds immense potential for enterprises across many domains, Ameche says. “However, successful implementation requires careful planning, ongoing training, and vigilance to avoid pitfalls.” He believes that organizations should view generative AI as a tool to augment human capabilities, not as a replacement. “When used strategically and responsibly, generative AI can transform efficiency, creativity, and decision-making, driving innovation and competitive advantage.”
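Kügel’s point about using LLMs to structure free-text records, referenced above, can be sketched in a few lines. This example assumes the openai Python client and an API key in the environment; the model name, JSON schema, and log entry are illustrative stand-ins, and a production pipeline would validate the model’s output before feeding it to downstream models.

```python
# A hedged sketch: using an LLM to turn an unstructured maintenance log
# into structured fields for downstream ML. Model choice and field names
# are assumptions for the example, not a recommendation.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LOG_ENTRY = "Pump 7 vibrating again on night shift; swapped bearing, 2h downtime."

PROMPT = (
    "Extract JSON with keys: asset, symptom, action_taken, downtime_hours. "
    "Respond with JSON only.\n\nLog entry: " + LOG_ENTRY
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any capable chat model; an assumption here
    messages=[{"role": "user", "content": PROMPT}],
)

record = json.loads(response.choices[0].message.content)
print(record)  # e.g. {"asset": "Pump 7", "symptom": "vibration", ...}
```

The prompt also illustrates Wu’s point: a tightly specified request (named fields, “JSON only”) yields far more usable output than an open-ended question.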


Gladwell at Gartner Event: Look Out for Radical Problem Superspreaders

Author and speaker Malcolm Gladwell probably wasn’t an obvious choice to give a guest keynote for an audience of IT leaders gathered this week at Gartner’s IT Symposium/Xpo in Orlando, Florida. His books and talks mostly focus on looking at social issues from different perspectives. But a packed ballroom of CIOs and other IT leaders learned that a conventional approach to problem solving could lead to catastrophe — especially when dealing with “radically asymmetrical” problems that don’t adhere to a normal curve of distribution.

Gladwell cited several events that defied conventional wisdom, where the culprit was an exception. From a faulty assumption by health officials during the COVID-19 pandemic, to a particularly gifted North Korean cybercriminal, to LA’s explosion of bank robberies in the 1990s — homing in on outliers may have produced better outcomes and solutions, Gladwell contends.

While a normal distribution would be the default way of viewing a problem, where the offenders fit into a category along with many others, radically asymmetrical problems defy a normal distribution, placing the culprits at the extreme.

Normal distribution “is our default for making sense of the world,” Gladwell said. “When we look at data, it’s going to organize itself in that kind of shape. We have an expectation about the story that data tells us, and the expectation is that the story is going to be about the middle … My question is, what happens when we have a problem where that story doesn’t work?”

The COVID-19 Superspreader Conundrum

At the onset of COVID-19, one of the earliest reported large-scale outbreaks happened after a Biogen event in Boston, Mass. Many attendees were infected and then traveled to various destinations around the country. According to a report in Science, as many as 300,000 people wound up being infected because of that one event.

Gladwell said researchers believe the source was a single person — a superspreader — someone who was genetically inclined to release a much higher level of aerosols. “This was the Taylor Swift of aerosols,” Gladwell said.

In this case, the source of the problem was a radically asymmetrical one. Had authorities been able to focus their efforts on so-called superspreaders, instead of focusing on social distancing measures for the general public, the outcome may have been different, Gladwell says. Leaders could have responded differently if they had thought in terms of radically asymmetrical possibilities.

What Can Leaders Learn?

Gladwell offered up other scenarios that illustrated his point about radical asymmetry, including a slew of bank robberies in 1990s Los Angeles, the ongoing opioid crisis, and one that hit very close to home for attendees: the case of Park Jin Hyok, a North Korean hacker at the center of several massive cyberattacks, including the 2014 hack of Sony Pictures.

“If you talk to people who are in the … high-end cybersecurity business, they will say, ‘Look, a lot of the time, all I’m doing is worrying about Park Jin. I’m not worried about those hundreds of everyday hackers in Romania or Bulgaria. No, this one guy who had a little crew somewhere in North Korea, he’s the one who keeps me up at night.’ That is a radical and radically asymmetrical distribution.”

The number of doctors who overprescribed opioids in the 1990s was relatively small. But those exceptional cases launched an epidemic of drug addiction that still lingers today, Gladwell says. As for the slew of bank robberies in Los Angeles, many of those were carried out by one gang. All of these problems may have been better addressed with more focus on their radically asymmetrical shape.

“So, I would simply say to all of you as you go back home after this conference is over and you participate in society … When a problem comes up and people come up with solutions, just raise your hand and say, ‘Before we go any further, what’s the shape of the curve?’”
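Gladwell’s “shape of the curve” question can be made concrete with a short simulation. This is a hedged illustration, not material from the talk, and the distribution parameters are arbitrary choices for the demo.

```python
# Hedged illustration of "what's the shape of the curve?": how much of a
# total the single largest contributor accounts for under a normal vs. a
# heavy-tailed (Pareto) distribution. Parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Normal world: everyone contributes about the same amount.
normal = rng.normal(loc=10.0, scale=2.0, size=n).clip(min=0)

# Heavy-tailed world: a few extreme outliers dominate (small alpha means
# a very fat tail).
heavy = (rng.pareto(a=1.1, size=n) + 1) * 10.0

for name, sample in [("normal", normal), ("heavy-tailed", heavy)]:
    share = sample.max() / sample.sum()
    print(f"{name:>12}: top individual = {share:.2%} of the total")
# Typical output: the top "normal" individual is around 0.02% of the
# total, while the heavy-tailed outlier can account for several percent
# or more on its own.
```

Under the normal curve the largest single contributor is negligible; under the heavy tail, one outlier can dominate the total, which is exactly the superspreader pattern Gladwell describes.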
