
What Could the Trump Administration Mean for Cybersecurity?

The results of the 2024 US presidential election kicked off a flurry of speculation about what policy changes a second Donald Trump administration will bring, including in cybersecurity. InformationWeek spoke to three experts in the cybersecurity space about potential shifts and how security leaders can prepare while the industry awaits change.

Changes to CISA

In 2020, Trump fired Cybersecurity and Infrastructure Security Agency (CISA) Director Christopher Krebs after Krebs attested to the security of the election, despite Trump's unsupported claims to the contrary. The federal agency could face a significant shakeup under a second Trump administration.

"The Republican party … believes that agency has had a lot of scope creep," says AJ Nash, founder and CEO of cybersecurity consultancy Unspoken Security.

For example, Project 2025, a policy playbook published by conservative think tank The Heritage Foundation, calls for an end to "… CISA's counter-mis/disinformation efforts." It also calls for limits on CISA's involvement in election security, and it proposes moving the agency to the Department of Transportation.

Trump distanced himself from Project 2025 during his campaign, but there is overlap between the playbook and the president-elect's plans, the New York Times reports.

"I think it [is] safe to say that CISA is going to have a lot of changes, if it exists at all, which I think [is] challenging because they have been very responsible for both election security and a lot of efforts to curb mis-, dis- and malinformation," says Nash.

AI Executive Order

In 2023, President Biden signed an executive order addressing AI and the major issues that arose in the wake of its boom: safety, security, privacy, and consumer protection. Trump plans to repeal that order.

"We will repeal Joe Biden's dangerous Executive Order that hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology. In its place, Republicans support AI Development rooted in Free Speech and Human Flourishing," according to a 2024 GOP Platform document.

Less federal oversight of AI development could lead to more innovation, but there are questions about what a lack of required guardrails could mean. How AI is developed and used has ramifications for cybersecurity and beyond.

"The tendency of generative AI to hallucinate or confabulate … that's the concern, which is why we have guardrails," points out Claudia Rast, chair of the intellectual property, cybersecurity, and emerging technology practice at law firm Butzel Long.

While the federal government may step back from AI regulation, that doesn't mean states will do the same. "You're going to see … California [and] Texas … and other states taking a very proactive role," says Jeff Le, vice president of global government affairs and public policy at cybersecurity ratings company SecurityScorecard.

California Governor Gavin Newsom has signed several bills relating to the regulation of GenAI, and a bill — the Texas Responsible AI Governance Act (TRAIGA) — was introduced in the Lone Star State earlier this year.

Cybersecurity Regulation

The Trump administration is likely to roll back more cybersecurity regulation than it introduces.
"I fully anticipate there to be a significant slowdown or rollback on language or mandated reporting, incident reporting as a whole," says Le.

Furthermore, billionaire Elon Musk and entrepreneur Vivek Ramaswamy will lead the new Department of Government Efficiency, which will look to cut regulation and restructure federal agencies, Reuters reports.

But enterprise leaders will still have plenty of regulatory issues to grapple with. "They'll be looking at the European Union. They'll be looking at regulations … coming out of Japan and Australia … they'll also be looking at US states," says Le. "That's going to be more of a question of how they're going to navigate this new patchwork."

Cyber Threat Actors

Nation-state cyber actors continue to be a pressing threat, and the Trump administration appears to be planning to focus on malicious activity coming out of China, Iran, North Korea, and Russia.

"I do anticipate the US taking a more aggressive stance, and I think that's been highlighted by the incoming national security advisor Mike Waltz," says Le. "I think he has made a point to prioritize a more offensive role, and that's with or without partners."

Waltz (R-Fla.) has been vocal about combating threats from China in particular.

Preparing for Change

Predicting the political future, even just a few months out, is difficult. With big changes to cybersecurity ahead, what can leaders do to prepare?

While uncertainty prevails, enterprise leaders have prior cybersecurity guidelines at their fingertips today. "It's time to deploy and implement the best practices that we all know are there and [that] people have been advising and counseling for years at this point," says Rast.


DHS Releases Secure AI Framework for Critical Infrastructure

The US Department of Homeland Security (DHS) has released recommendations that outline how to securely develop and deploy artificial intelligence (AI) in critical infrastructure. The recommendations apply to every player in the AI supply chain, from cloud and compute infrastructure providers, through AI developers, to critical infrastructure owners and operators. Recommendations for civil society and public-sector organizations are also provided.

The voluntary recommendations in "Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure" address each role across five key areas: securing environments, driving responsible model and system design, implementing data governance, ensuring safe and secure deployment, and monitoring performance and impact. There are also technical and process recommendations to enhance the safety, security, and trustworthiness of AI systems.

AI is already being used for resilience and risk mitigation across sectors, DHS said in a release, citing applications such as earthquake detection, power-grid stabilization, and mail sorting.

The framework lays out each role's responsibilities:

Cloud and compute infrastructure providers need to vet their hardware and software supply chains, implement strong access management, and protect the physical security of the data centers powering AI systems. The framework also recommends supporting downstream customers and processes by monitoring for anomalous activity and establishing clear processes for reporting suspicious and harmful activities.

AI developers should adopt a secure-by-design approach, evaluate dangerous capabilities of AI models, and "ensure model alignment with human-centric values." The framework further encourages AI developers to implement strong privacy practices; conduct evaluations that test for possible biases, failure modes, and vulnerabilities; and support independent assessments for models that present heightened risks to critical infrastructure systems and their consumers.

Critical infrastructure owners and operators should deploy AI systems securely, including maintaining strong cybersecurity practices that account for AI-related risks, protecting customer data when fine-tuning AI products, and providing meaningful transparency about their use of AI to deliver goods, services, or benefits to the public.

Civil society, including universities, research institutions, and consumer advocates engaged on issues of AI safety and security, should continue working on standards development alongside government and industry, as well as research on AI evaluations that considers critical infrastructure use cases.

Public-sector entities, including federal, state, local, tribal, and territorial governments, should advance standards of practice for AI safety and security through statutory and regulatory action.

"The framework, if widely adopted, will go a long way to better ensure the safety and security of critical services that deliver clean water, consistent power, Internet access, and more," said DHS Secretary Alejandro N. Mayorkas in a statement.

The DHS framework proposes a model of shared and separate responsibilities for the safe and secure use of AI in critical infrastructure. It also relies on existing risk frameworks to let entities evaluate whether using AI for certain systems or applications carries severe risks that could cause harm.
"We intend the framework to be, frankly, a living document and to change as developments in the industry change as well," Mayorkas said during a media call.
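The framework's recommendation that infrastructure providers monitor for anomalous activity is stated at the level of principle rather than implementation. As a toy sketch of what a first pass might look like (the tenants, request counts, and three-sigma threshold below are invented for illustration and are not drawn from the DHS document):

```python
from statistics import mean, stdev

# Hypothetical per-tenant daily API request counts for recent days.
# In practice these would come from the provider's logging pipeline.
history = {
    "tenant-a": [1020, 980, 1110, 1005, 990, 1040, 1000, 970, 1030, 1015],
    "tenant-b": [210, 190, 205, 198, 220, 215, 200, 195, 208, 202],
}
today = {"tenant-a": 1050, "tenant-b": 4800}  # tenant-b spikes

def is_anomalous(samples, value, z_threshold=3.0):
    """Flag value if it sits more than z_threshold standard deviations
    from the historical mean -- a deliberately simple baseline."""
    mu, sigma = mean(samples), stdev(samples)
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > z_threshold

for tenant, count in today.items():
    if is_anomalous(history[tenant], count):
        # The framework also asks for "clear processes for reporting
        # suspicious and harmful activities" -- here we just print.
        print(f"ALERT: {tenant} request volume {count} is anomalous")
```

A production system would use richer signals than raw request counts, but the shape is the same: baseline per-tenant behavior, flag deviations, and feed alerts into a defined reporting process.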


How Will AI Shape the Future of Cloud and Vice Versa?

The interwoven relationship between the cloud and artificial intelligence continues to grow more complex as demand for both resources increases. AI can be put to work to run the cloud more efficiently, while the cloud can be the platform where AI is developed and does its heavy lifting. It is no secret that many organizations want to leverage AI, sometimes in ways that might not be clear yet. That chase to put AI to work, coming on the heels of the cloud transformation age, could lead to unexpected developments for both spheres of technology. Will the pace of AI's rise change the nature of the cloud? What types of cloud systems and resources stand to benefit from, or need to adapt to, AI? In this episode, Sundaram Lakshmanan, CTO of Lookout, and Amrit Jassal, co-founder and CTO of Egnyte, share their insights on how this space has taken shape in the AI era and the potential road ahead. Listen to the full podcast.


Unicorn AI Firm Writer Raises $200M, Plans to Challenge OpenAI, Anthropic

The enterprise AI arms race shows no signs of slowing. Writer, a San Francisco-based startup founded in 2020, on Tuesday announced a $200 million Series C venture capital raise, giving the company unicorn status at a $1.9 billion valuation. Writer will use the new funding to accelerate development of its AI solutions and applications for use in healthcare, retail, and financial services.

Last month, Writer launched its enterprise large language model (LLM) trained on synthetic data, an increasingly popular training method as companies look to minimize privacy concerns.

"We're not just creating LLMs that can execute tasks but developing advanced AI systems that deliver mission-critical enterprise work," Writer CEO and co-founder May Habib said in a statement. "With this new funding, we're laser-focused on delivering the next generation of autonomous AI solutions that are secure, reliable, and adaptable in highly complex, real-world enterprise scenarios."

According to Statista, the global AI market size in 2024 stood at $184 billion. In 2023, AI startups saw valuations 20% higher than non-AI companies raising seed funding; at the Series B level, AI startup valuations were nearly 60% higher. As companies race to add generative AI (GenAI) capabilities, AI vendors are releasing enterprise-grade solutions at a quick pace. OpenAI, Anthropic, AWS, Salesforce, Oracle, and many other vendors are looking to cash in on booming AI demand.

Premji Invest, Radical Ventures, and ICONIQ Growth led the funding round. Writer's backers include the venture arms of Salesforce, Adobe, Citi, IBM, and Workday, along with Accenture, Vanguard, and other major tech venture capital investors. The company boasts a client list that includes Ally Bank, Qualcomm, Salesforce, Uber, and more.

Writer's $1.9 billion valuation is nearly quadruple the company's September 2023 valuation of $500 million. Bloomberg Intelligence expects GenAI to reach a market size of $1.3 trillion by 2032, a compound annual growth rate of 43%. As of March 2024, the AI sector had 214 unicorns (companies with at least a $1 billion valuation) globally, according to Edge Delta. OpenAI is currently the most valuable of those companies, at an $80 billion valuation.
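A quick back-of-the-envelope check on the growth figures above (the eight-year compounding horizon from 2024 is our assumption; Bloomberg's baseline year isn't given here):

```python
def implied_base(end_value, cagr, years):
    """Starting market size implied by an end value and a compound
    annual growth rate: base * (1 + cagr)**years == end_value."""
    return end_value / (1 + cagr) ** years

# Writer's valuation jump: $500M (Sept 2023) -> $1.9B.
print(f"valuation multiple: {1.9e9 / 500e6:.1f}x")  # 3.8x, nearly quadruple

# Bloomberg Intelligence: $1.3T GenAI market by 2032 at a 43% CAGR.
# Assuming ~8 years of compounding from 2024 (our assumption):
base = implied_base(1.3e12, 0.43, 8)
print(f"implied 2024 GenAI market: ${base / 1e9:.0f}B")  # roughly $74B
```

Under that assumption, the implied 2024 GenAI base of roughly $74 billion sits comfortably inside Statista's $184 billion estimate for the AI market as a whole, consistent with GenAI being a subset of the broader market.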


Smart Service Management–Easy Automation for Manual IT Tasks

Work better together with a connected enterprise – integrate and automate ITSM to streamline ticket requests. Discover how integrating automation with your IT Service Management software can transform your operations. From managing ticket requests to handling complex processes like employee onboarding, this approach optimizes workflows and improves service delivery, enabling seamless collaboration across departments like IT, HR, and Facilities.

Key Benefits:
– Automate complex workflows such as employee onboarding and offboarding across systems
– Integrate multiple platforms effortlessly with a no-code solution
– Manage tickets, service requests, and projects all in one unified platform
– Empower different teams to build and modify ESM apps without technical expertise

Learn how you can elevate your organization's efficiency through smart service management!

Offered Free by: TeamDynamix


Getting a Handle on AI Hallucinations

AI hallucination occurs when an AI model — frequently a large language model (LLM) behind a generative AI chatbot, or a computer vision tool — perceives patterns or objects that are nonexistent or imperceptible to human observers, generating outputs that are inaccurate or nonsensical.

AI hallucinations can pose a significant challenge, particularly in high-stakes fields where accuracy is crucial, such as the energy industry, life sciences and healthcare, technology, finance, and legal sectors, says Beena Ammanath, head of technology trust and ethics at business advisory firm Deloitte. With generative AI's emergence, validating outputs has become even more critical for risk mitigation and governance, she states in an email interview. "While AI systems are becoming more advanced, hallucinations can undermine trust and, therefore, limit the widespread adoption of AI technologies."

Primary Causes

AI hallucinations are primarily caused by the nature of generative AI and LLMs, which rely on vast amounts of data to generate predictions, Ammanath says. "When the AI model lacks sufficient context, it may attempt to fill in the gaps by creating plausible sounding, but incorrect, information." This can occur due to incomplete training data, bias in the training data, or ambiguous prompts, she notes.

LLMs are generally trained for specific tasks, such as predicting the next word in a sequence, observes Swati Rallapalli, a senior machine learning research scientist in the AI division of the Carnegie Mellon University Software Engineering Institute. "These models are trained on terabytes of data from the Internet, which may include uncurated information," she explains in an online interview. "When generating text, the models produce outputs based on the probabilities learned during training, so outputs can be unpredictable and misrepresent facts."

Detection Approaches

Depending on the application, hallucination-metric tools such as AlignScore can be trained to capture the similarity between two text inputs. Yet automated metrics don't always work effectively. "Using multiple metrics together, such as AlignScore, with metrics like BERTScore, may improve the detection," Rallapalli says.

Another established way to minimize hallucinations is retrieval augmented generation (RAG), in which the model references text from established databases relevant to the output. "There's also research in the area of fine-tuning models on curated datasets for factual correctness," Rallapalli says.

Yet even existing combinations of metrics may not fully guarantee hallucination detection, so further research is needed to develop more effective ones, Rallapalli says. "For example, comparing multiple AI outputs could detect if there are parts of the output that are inconsistent across different outputs or, in case of summarization, chunking up the summaries could better detect if the different chunks are aligned with facts within the original article." Such methods could help detect hallucinations better, she notes.
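To make Rallapalli's multi-metric suggestion concrete, here is a minimal sketch of blending scores. BERTScore's Python API is real (pip install bert-score); the align_score function is a stand-in for an AlignScore-style factual-consistency scorer, and the equal weighting and threshold-based flagging are our illustrative assumptions, not a published recipe:

```python
from bert_score import score as bert_score  # pip install bert-score

def align_score(context: str, claim: str) -> float:
    """Placeholder for an AlignScore-style factual-consistency scorer;
    a real implementation would load the trained AlignScore model."""
    raise NotImplementedError

def hallucination_score(source_text: str, generated: str) -> float:
    """Blend semantic similarity (BERTScore F1) with factual consistency.
    Lower combined score = more likely the output strays from the source."""
    _, _, f1 = bert_score([generated], [source_text], lang="en")
    semantic = f1.item()
    factual = align_score(source_text, generated)
    return 0.5 * semantic + 0.5 * factual  # arbitrary equal weighting

# Outputs whose combined score falls below a chosen threshold get
# routed to human review rather than released automatically.
```

In practice the weights and the flagging threshold would be tuned on a labeled validation set for the specific application, which is exactly why Rallapalli cautions that no single metric suffices.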
Ammanath believes that detecting AI hallucinations requires a multi-pronged approach. She notes that human oversight, in which AI-generated content is reviewed by experts who can cross-check facts, is sometimes the only reliable way to curb hallucinations.

"For example, if using generative AI to write a marketing e-mail, the organization might have a higher tolerance for error, as faults or inaccuracies are likely to be easy to identify and the outcomes are lower stakes for the enterprise," Ammanath explains. Yet when it comes to applications that involve mission-critical business decisions, error tolerance must be low. "This makes a 'human in the loop,' someone who validates model outputs, more important than ever before."

Hallucination Training

The best way to minimize hallucinations is to build your own pre-trained foundational generative AI model, advises Scott Zoldi, chief analytics officer at analytics software company FICO. He notes, via email, that many organizations are already using, or planning to use, this approach with focused-domain and task-based models. "By doing so, one can have critical control of the data used in pre-training — where most hallucinations arise — and can constrain the use of context augmentation to ensure that such use doesn't increase hallucinations but reinforces relationships already in the pre-training."

Beyond building your own focused generative models, you need to minimize the harm hallucinations create, Zoldi says. "[Enterprise] policy should prioritize a process for how the output of these tools will be used in a business context and then validate everything," he suggests.

A Final Thought

To prepare the enterprise for a bold and successful future with generative AI, it's necessary to understand the nature and scale of the risks, as well as the governance tactics that can help mitigate them, Ammanath says. "AI hallucinations help to highlight both the power and limitations of current AI development and deployment."


Modernizing ITSM in the Public Sector

Brought to you by TeamDynamix

As state and local governments look to modernize legacy systems and kick off digital transformation strategies, many are finding their existing IT Service Management (ITSM) tools can't support these initiatives. Rather than spending excessive time and money trying to make these tools fit, public sector organizations are investing in new, codeless ITSM platforms that include Project Portfolio Management (PPM), Enterprise Service Management (ESM) capabilities, and integration and automation – all on a single platform.

Read this eBook to learn how your peers are changing the way they run the IT service desk:
– City of Madison Leverages No-Code Tech to Transform IT Service Delivery
– Pima County Supercharges ITSM with Automation
– City of Buffalo Automates ITSM & Improves Self-Service Adoption
– Oklahoma City Drives Efficiency for the Technicians
– City of Avondale Embraces Enterprise Service Management

Offered Free by: TeamDynamix


How AI is Reshaping the Food Services Industry

The food services industry might seem an unlikely candidate for AI adoption, yet the market, which includes full-service restaurants, quick-service restaurants, catering companies, coffee shops, private chefs, and a variety of other participants, is rapidly recognizing AI's immediate and long-term potential.

AI in food services is poised for widespread adoption, predicts Colin Dowd, industry strategy senior manager at Armanino, an accounting and consulting firm. "As customer expectations shift, companies will be forced to meet their demands through AI solutions that are similar to their competitors," he notes in an email interview.

Mike Kostyo, a vice president with food industry consulting firm Menu Matters, agrees. "It's hard to think of any facet of the food industry that isn't being transformed by AI," he observes via email. Kostyo says his research shows that consumers want lower costs, easier ways to customize or personalize a meal, and faster service. "We tell our clients they should focus on those benefits and make sure they're clear to consumers when they implement new AI technologies."

Seeking Insights

On the research side, AI is being used to make sense of the data deluge firms currently face. "Food companies are drowning in research and data, both from their own sources, such as sales data and loyalty programs, and from secondary sources," Kostyo says. "It's just not feasible for a human to wade through all of that data, so today's companies use AI to sift through it all, make connections, and develop recommendations."

AI can, for example, detect that spicy beverages are starting to catch on when paired with a particular flavor. "So, it may recommend building that combination into a new menu option or product," Kostyo says. It can do this constantly over time, taking billions of data points into account and creating starting positions for innovation. "The team can take it from there, filling their pipeline with relevant products and menu items."

Data collected from multiple sources can also be used to track customer preferences, providing early insights on emerging flavor trends. "For example, Campbell's and Coca-Cola are currently using AI in tandem with food scientists to create new and exciting flavors and dishes for their customers based on insights collected from both internal and external data sources," Dowd says. "This approach can also be applied to restaurants and other locations that rely on recipes."

Management and Innovation

AI can also optimize inventory management. "AI is being used to determine when to order, and how much inventory a company needs to purchase, by analyzing historical data and current trends," Dowd says. "This allows the restaurant to maintain ideal inventory levels, reduce waste and better ensure that the restaurant always has the necessary ingredients."

When used as an innovation generator, AI can inspire fresh ideas. "Sometimes, when you get in that room together to come up with a new menu item or product, just facing down that blank page is the hardest part," Kostyo observes.
"You can use AI for some starter ideas to work with." He says he loves to feed outlandish ideas into AI, such as, "What would a dessert octopus look like?" "It may then develop this really wild dessert, like a chocolate octopus with different-flavored tentacles."

Customer Experience

AI promises to help restaurants provide a consistently positive experience to consumers, says Jay Fiske, president of Powerhouse Dynamics, an AI and IoT solutions provider for major multi-site food service firms, including Dunkin', Arby's, and Buffalo Wild Wings. He notes in an email interview that AI and ML can be used to flag concerning data indicating potential problems, such as frozen meat going into the oven before it should, or to predict a likely freezer breakdown sometime within the next two weeks. "In these situations, facility managers have time to quickly preempt any issues that could cost them money, as well as their reputations with consumers," he says.

Another way AI is transforming the food services industry is by providing more efficient and reliable energy management. "This is important, because restaurants, ghost kitchens, and other food service businesses are extremely energy intensive," Fiske says. Refrigerators, freezers, ovens, dishwashers, fryers, and air conditioners all consume massive amounts of power that AI can help control and optimize.

Future Outlook

The sky is the limit for AI in the food services industry, Kostyo states, noting that market players are taking various approaches. Some are excited about AI and afraid of being left behind, so they're jumping right into these tools; others are more skittish, concerned about ethical and privacy issues.

Kostyo urges AI adopters to periodically monitor their customers' AI acceptance level. "In some ways, customers are very open to AI," he says. "Forty-six percent of consumers told us they're already using AI to assist with food decisions in some fashion, such as deciding what to cook or where to eat." Kostyo adds that 59% of surveyed consumers believe that AI can develop a recipe that's just as delicious as anything a human chef could create.

On the other hand, people still often crave a human touch. Kostyo reports that 66% of consumers would still rather have a dish created by a human chef. "Consumers frequently push back when they see AI being used in a way that would take a human job."

Service First

Kostyo urges the food industry to use AI in ways that enhance the overall consumer experience. "At the end of the day, we are the hospitality industry, and we need to remember that."
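Returning to Dowd's point about AI-driven ordering: under the hood, "when to order and how much" is a forecasting-plus-policy problem. A minimal, non-AI baseline is the textbook reorder-point formula sketched below; the usage numbers, lead time, and service level are invented for illustration, and a production system would swap the simple average for a learned demand forecast:

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical daily usage of one ingredient (e.g., kg of chicken) --
# in practice this comes from POS and inventory history.
daily_usage = [42, 38, 45, 51, 40, 47, 44, 39, 48, 46]
lead_time_days = 3   # supplier delivery delay
z = 1.65             # ~95% service level

avg = mean(daily_usage)
sd = stdev(daily_usage)

# Textbook reorder point: expected demand over the lead time plus
# safety stock to absorb day-to-day demand variability.
safety_stock = z * sd * sqrt(lead_time_days)
reorder_point = avg * lead_time_days + safety_stock

print(f"reorder when stock falls below {reorder_point:.0f} units")
```

The AI systems Dowd describes improve on this baseline mainly by making the demand estimate smarter, folding in trends, seasonality, and current conditions rather than a flat historical average.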


Shedding Light on Your Shadow IT

Shadow IT has long been a problem for companies, from personal devices brought into the workplace to untested software installed inside the perimeter. As companies have moved to the cloud, the problem has only become more tangled: Well-meaning employees set up unsanctioned services, and technical teams use unapproved cloud services to add functionality to their projects. Plus, remote employees and their mashup of consumer and prosumer technologies bring less visibility and more risk into the IT-security equation.

According to HashiCorp's 2024 study, only 8% of companies had "highly mature" practices across both infrastructure and security lifecycle management. Add to that mix the chaos of a merger or divestiture, and problems can grow quickly. The blending of two technology platforms in a merger, or the breaking apart of common infrastructure in a divestiture, is likely to lead to breakage and the loss of security oversight.

Managing shadow IT is an ongoing challenge that requires a combination of technical controls, governance processes, and cultural change to address effectively. Here are three ways that companies can get a handle on shadow IT.

1. SSO is necessary, but far from sufficient.

A common way to gain visibility into cloud and on-premises services is to rely on single sign-on (SSO) platforms to see which applications and services employees are using. The challenge, however, is that not every application is SSO-enabled, especially cloud or mobile applications on the personal devices employees often use for work.

Separations and divestitures produce duplicates of most critical services, new devices for employees, and the need to revamp all security controls as a company moves from legacy services to a new platform. During these times, detection, analysis and response to threats (DART) can be particularly challenging.

The lesson for corporate security teams is not only to gain visibility, but to create a backend process that educates employees and diverts them from non-approved, risky applications to approved platforms.

2. Assets must be discovered across hybrid infrastructure.

Another challenge is the proliferation of remote and mobile workers, whose devices — often poorly managed — sit in home offices or connect from the road.

For in-house workers, companies have default control over on-premises technology, even if that technology is non-sanctioned shadow IT. To help manage remote technology, companies should have agents on any device connecting to a corporate cloud service or using a virtual private network. Such security can be sufficient, depending on how your company implements the defenses and checkpoints.

During a merger, organizations must gain clear visibility of all IT assets across the new enterprise and enforce a zero-trust approach to any access to sensitive corporate data. During a separation, organizations may lose visibility of devices and applications, resulting in shadow IT and potential vectors of attack.

The transition to remote work caused by the coronavirus pandemic forced many companies to switch to secure web gateways to enforce policies with in-house and remote employees. Companies should focus on additional zero-trust security measures to enforce security policies even when employees are outside the corporate firewall.

3. Cultural changes are necessary.
Organizations must make sure that every cloud service supports their security mission and that no technology goes unmanaged. This is especially true during challenging events, such as a merger or divestiture.

Shadow IT comes from a culture that treats security teams as gatekeepers to be evaded. According to software supply-chain firm Snyk, more than 80% of companies have developers skirting security policies and using AI code-completion tools to generate code. ChatGPT and other large language models (LLMs) became the top shadow IT in 2023, months after release.

Companies need to show employees why security is necessary to keep the business running and what the consequences could be if that focus is lost. Keeping that focus is admittedly difficult, especially when companies cycle between emphasizing security and emphasizing cost savings.

Effective management of shadow IT calls for a combination of strong technical measures and a culture of security awareness, reducing the risks associated with unapproved tools and services. In times of rapid digital transformation, especially during mergers and divestitures, creating a flexible IT infrastructure that adapts to change is key to safeguarding security and maintaining trust across the business.
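As a starting point for the visibility problem described in step one, even a crude diff of web-gateway logs against a sanctioned-application catalog can surface shadow IT candidates. A toy sketch (the log format, domain names, and threshold below are all invented for illustration):

```python
from collections import Counter

# Sanctioned SaaS domains -- e.g., exported from the SSO provider's
# app catalog. All names here are invented for illustration.
APPROVED = {"sso.example.com", "mail.example.com", "crm.example.com"}

def unsanctioned_apps(proxy_log_lines, min_requests=3):
    """Count requests per destination domain in web-gateway logs and
    report frequently seen domains missing from the approved catalog.
    Assumes 'user domain' pairs, one per line -- a stand-in log format."""
    hits = Counter()
    for line in proxy_log_lines:
        _user, domain = line.split()
        if domain not in APPROVED:
            hits[domain] += 1
    return {d: n for d, n in hits.items() if n >= min_requests}

logs = [
    "alice crm.example.com",
    "bob notes.unvetted.app",
    "carol notes.unvetted.app",
    "dave notes.unvetted.app",
]
print(unsanctioned_apps(logs))  # {'notes.unvetted.app': 3}
```

Anything this surfaces is a candidate for review, and, in keeping with the cultural point above, an opportunity to steer users toward an approved alternative rather than simply blocking them.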
