Information Week

Navigating the Risks: Why SaaS Management is Crucial for Compliance and Security in Healthcare & Finance

“Navigating the Risks: Why SaaS Management is Crucial for Compliance and Security in Healthcare & Finance”
Thursday, November 14, 2024 – On-Demand

In today’s world of remote work and SaaS proliferation, organizations in healthcare and finance face increasing challenges in managing data as it moves to SaaS applications, all while ensuring compliance with HIPAA, NIST, PCI DSS, and SOC 2. Join Auvik SaaS Management’s Director & Tech Evangelist, Steve Petryschuk, and Senior Product Manager, Ben Botti, as they explore the critical role of SaaS management in improving security and maintaining compliance in these highly regulated industries.

This webinar will cover:
• Mitigating security risks with effective SaaS discovery.
• Strategies for safeguarding data and preventing unauthorized access.
• A live demo of Auvik’s automated SaaS visibility, risk management, and compliance reporting.

Learn how to streamline your SaaS operations, reduce wasted licenses, and ensure adherence to regulatory standards in your organization.

Speakers:

Steve Petryschuk, Director & Tech Evangelist, Auvik
At Auvik, Steve works with prospects, clients, and the IT community at large to identify, research, and analyze complex IT operations challenges, helping guide the Auvik roadmap to better serve the IT community. Steve holds a Bachelor of Engineering and Management and is a registered Professional Engineer in Ontario, with IT, networking, and IT security experience spanning product management, DevOps, systems administration, solutions engineering, and technical training roles.

Ben Botti, Senior Product Manager – SaaS Management, Auvik
Ben specializes in partnering with MSPs and IT teams to address their most critical challenges, with nearly 15 years of expertise in the space. Initially honing his skills within a regional MSP, he now contributes to developing tools designed to support IT teams as a Senior Product Manager at Auvik. His background covers automation, telecom, and now overseeing solutions in Auvik’s SaaS Management product, reflecting his commitment to building the best tools for frictionless IT.

Moderator: Michael Krieger, InformationWeek

Offered Free by: Auvik Networks, Inc.

Navigating the Risks: Why SaaS Management is Crucial for Compliance and Security in Healthcare & Finance Read More »

Enterprise Service Management

“Enterprise Service Management | Going Beyond IT”
Brought to you by TeamDynamix

Embrace a unified approach for managing all your organization’s transactional requests with Enterprise Service Management (ESM) software. Originating from IT Service Management (ITSM), ESM extends its capabilities beyond IT, transforming processes across departments such as HR, marketing, facilities, and legal.

Discover the evolution and benefits of ESM:
- Centralized Platform: Streamline operations by using a single platform across various departments to handle all service requests efficiently.
- Advanced ITSM Foundations: Built on decades of ITSM practices, ESM incorporates asset management, change management, and ITIL standards, ensuring best practices and reliability.
- Service Request Efficiency: Simplify request management from basic IT tasks like password resets to complex issues such as performance troubleshooting, all managed through an intuitive ticketing system.
- Enhanced Self-Service Portals: Empower end-users with a rich service catalog and comprehensive knowledge base, promoting self-directed solutions and reducing the burden on support teams.
- Integrated Project Portfolio Management (PPM): With solutions like TeamDynamix, monitor all transactional and project work in a single view, enabling efficient resource planning and workload balance.

Transition seamlessly from ITSM to ESM and harness the power of this versatile framework across your organization. Download this whitepaper to understand the key building blocks for going from ITSM to ESM.

Offered Free by: TeamDynamix

Enterprise Service Management Read More »

State of ITSM in Retail

“State of ITSM in Retail”
An InformationWeek Report | Sponsored by TeamDynamix

In today’s competitive market, retailers must harness the power of robust technology service delivery to thrive. Whether managing a complex e-commerce platform or a network of physical stores—or often both—retailers rely heavily on dependable IT systems. These systems pave the way for a seamless omnichannel customer experience, personalized marketing, streamlined supply chain operations, and flawless business functionality.

Key benefits include:
- Enhanced Efficiency: Automation reduces administrative costs and improves service-level agreements (SLAs).
- Future-Ready Solutions: Introduce self-service functionality and leverage emerging technologies to stay ahead.
- Enterprise Service Management (ESM) Innovation: Extend ITSM capabilities across retail departments for comprehensive service enhancements.

Discover how cutting-edge IT solutions can elevate your retail operations. Download the full report to learn more.

Offered Free by: TeamDynamix

State of ITSM in Retail Read More »

Automating Accessibility Testing for a More Inclusive World

Accessibility testing, often referred to as a11y testing, has traditionally depended heavily on manual processes. However, the integration of automation is transforming our approach to achieving accessibility compliance. The term “a11y” is a numeronym for “accessibility,” where the number 11 represents the count of letters between the “a” and the “y.”

At Chase, our a11y efforts are driven by feedback from our customers with disabilities. Because we release a new version of the Chase Mobile® App every two weeks, in addition to maintaining Chase.com without compromising accessibility, testing and automation are crucial.

Importance and Role of Accessibility Automation

Automated accessibility testing is crucial for several reasons. It has emerged as a powerful ally in the quest for digital accessibility: it helps ensure digital content is inclusive and compliant with accessibility standards, such as the Web Content Accessibility Guidelines (WCAG), and reduces the risk of discrimination lawsuits. By leveraging automated tools and scripts, organizations can efficiently identify and rectify accessibility issues across various digital interfaces, including websites, mobile applications, and software platforms.

Test Accuracy and Precision

Automation can cover various aspects of accessibility, including keyboard navigation, semantic HTML structure, Accessible Rich Internet Applications (ARIA) attributes, and more. This comprehensive coverage helps identify a wide range of accessibility issues. One of the primary advantages of accessibility automation is its ability to provide accurate and precise results. Automated tools can detect accessibility violations with a high degree of accuracy, offering detailed insights into compliance issues that might otherwise go unnoticed.

Benefits of Accessibility Automation

The benefits of a11y automation are vast. It saves time and resources, catches issues early in the development cycle, and provides consistent and scalable testing.

- Early Detection: Automation tools can quickly identify accessibility violations during development, allowing teams to address issues early in the process.
- Increased Efficiency: Automation accelerates the testing process by swiftly scanning for accessibility violations, minimizing the time and effort required for comprehensive evaluations.
- Consistency & Reliability: Automated tests deliver consistent results, reducing human error and ensuring a reliable assessment of accessibility compliance.
- Cost-Effectiveness: Over time, automated accessibility testing proves cost-effective by reducing the need for extensive manual testing efforts.

Our Technical Approach

Our approach to accessibility automation is grounded in creating a seamless and efficient process that ensures accessibility from the earliest stages of development. External accessibility testing tools and software, combined with our custom Unified Digital Framework (UDF) scripts, play a critical role in this process by identifying both common and complex accessibility issues before they reach production. In short, UDF scripts provide flexibility and depth in our accessibility testing, allowing us to go beyond out-of-the-box solutions and ensure our products meet both basic and advanced accessibility standards. They allow us to improve test coverage and ensure a more inclusive digital experience by developing customized checks for complex scenarios, dynamic content, ARIA, and enhanced semantic HTML testing.

By leveraging built-in accessibility rules, we can automate the detection of common accessibility issues such as missing document-title, empty-heading, html-has-lang, html-lang-valid, and meta-viewport without extensive manual testing. These are often the most frequent accessibility barriers that prevent users with disabilities from fully interacting with digital products.
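The rule names listed above (document-title, html-has-lang, empty-heading, and so on) match the rule IDs used by common automated engines such as axe-core. As a minimal sketch of what such rules actually check, the following uses only Python's standard-library HTML parser; it is an illustration of the technique, not the engine Chase uses:

```python
# Minimal sketch of three common a11y rules: document-title, html-has-lang,
# and empty-heading. Real engines such as axe-core are far more thorough;
# this only illustrates the kind of check automation runs on every build.
from html.parser import HTMLParser

class A11yChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.violations = []
        self.has_title = False
        self.in_heading = None   # name of the heading tag currently open
        self.heading_text = ""

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "html" and not attrs.get("lang"):
            self.violations.append("html-has-lang: <html> is missing a lang attribute")
        if tag == "title":
            self.has_title = True
        if tag in ("h1", "h2", "h3", "h4", "h5", "h6"):
            self.in_heading = tag
            self.heading_text = ""

    def handle_data(self, data):
        if self.in_heading:
            self.heading_text += data

    def handle_endtag(self, tag):
        if tag == self.in_heading:
            if not self.heading_text.strip():
                self.violations.append(f"empty-heading: <{tag}> has no text content")
            self.in_heading = None

    def check(self, html):
        self.feed(html)
        if not self.has_title:
            self.violations.append("document-title: page has no <title> element")
        return self.violations

violations = A11yChecker().check("<html><head></head><body><h1></h1></body></html>")
for v in violations:
    print(v)
```

Because checks like these are deterministic and fast, they can run on every commit, which is what makes catching these "most frequent" barriers cheap compared to manual review.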
The built-in tool’s ability to check for these issues directly within our code allows us to fix them early and rapidly, contributing to a faster time to market and higher overall quality. However, accessibility is more than just catching low-hanging fruit. That’s where our UDF scripts come in. These scripts allow us to extend the tool’s capabilities and address more nuanced accessibility concerns, such as ensuring smooth keyboard navigation and focus management, avoiding keyboard traps, and maintaining a clear semantic HTML structure. Our use of ARIA attributes ensures that dynamic content is properly announced to screen readers, making our web applications more inclusive.

This dual approach, combining the power of automated testing with the precision of customized UDF scripts, forms the backbone of our accessibility strategy. It allows us to confidently meet WCAG guidelines while improving the overall usability of our products, ensuring that accessibility is an integral part of our development lifecycle rather than an afterthought.

Challenges of Accessibility Automation

Despite its benefits, accessibility automation also presents unique challenges:

- Complexity of Testing: Automated tests may struggle with complex interactions or dynamic content, leading to potential false positives or negatives.
- Technical Limitations: Some accessibility issues require human judgment and intuition to identify accurately, challenging the efficacy of purely automated approaches.

The Hybrid Approach: Automation and Manual Testing

To address these challenges, a hybrid approach combining automated testing and manual testing with expert audits is often the most effective strategy. Automation provides speed and scalability, while human intervention ensures accuracy and depth in accessibility evaluations. A recent hybrid-approach test scenario resulted in 52% of applicable checkpoints being tested using automation, with the remainder tested manually. As we continue to test, we’ll be able to evaluate our speed, scalability, and effectiveness to increase our ratio of automated checkpoints.

Applying Automation Testing to Mobile Apps

For the Chase Mobile App, automated accessibility checks scan hundreds of screens on both the Android and iOS apps, focusing primarily on five WCAG checkpoints. A set of validations that previously required more than five hours of effort using several tools can now be performed within minutes, significantly reducing our time to market. To achieve this, we leverage the capabilities of a third-party software development kit (SDK) through a single-point integration in our UDF. This allows us to utilize and extend the tool’s capabilities across our entire suite of automated tests via a runtime command. WCAG 2.0-level validations for Contrast, Accessible Name, and Focus Trap are automatically triggered as test scripts execute. Checkpoint validations are continuously and seamlessly performed as each test script flows from one screen to the next. A report is generated per test script, furnished with a screenshot and details that include the id of the element, the nature and severity of the non-compliance, and a detailed description of the WCAG checkpoint violation detected. We are also…
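A per-script report entry of the kind described (element id, severity, checkpoint, description, screenshot) might be modeled as follows. The field names here are hypothetical illustrations, not Chase's actual report schema:

```python
# Hypothetical shape of one entry in a per-test-script accessibility report,
# as described above. Field names and values are illustrative only.
from dataclasses import dataclass, asdict

@dataclass
class A11yViolation:
    element_id: str   # id of the offending UI element
    checkpoint: str   # WCAG checkpoint violated
    severity: str     # e.g. "critical", "serious", "moderate"
    description: str  # detailed description of the non-compliance
    screenshot: str   # path to the captured screen image

report = [
    A11yViolation(
        element_id="btn_transfer",
        checkpoint="1.4.3 Contrast (Minimum)",
        severity="serious",
        description="Text contrast ratio 2.9:1 is below the 4.5:1 minimum",
        screenshot="screens/transfer_funds.png",
    )
]
print([asdict(v) for v in report])
```

Structuring each violation as a record like this is what makes the reports easy to aggregate across hundreds of screens and track release over release.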

Automating Accessibility Testing for a More Inclusive World Read More »

The Essential Tools Every AI Developer Needs

AI development, like the technology itself, is still in its early stages. This means that many development tools are also emerging and advancing. Over the past several months, we’ve seen the rise of a new technology stack when it comes to AI application development, as the focus shifts from building machine learning models to building AI solutions, says Maryam Ashoori, director of product management for watsonx.ai at IBM, in an email interview. “To navigate exponential leaps in AI, developers must translate groundbreaking AI research into real-world applications that benefit everyone.”

Essential Tools

Current AI tools provide a comprehensive ecosystem supporting every stage of the AI development process, says Savinay Berry, CTO and head of strategy and technology at cloud communications services provider Vonage, in an online discussion. A wide array of tools helps developers create and test code, manage large datasets, and build, train, and deploy models, allowing users to work efficiently and effectively, he notes. “They also facilitate the interpretation of complex data, ensure scalability through cloud platforms, and offer robust management of data pipelines and experiments, which are crucial for the continuous improvement and success of AI projects.”

Within the current AI landscape, there are a variety of essential development tools, Ashoori states, including integrated development environments (IDEs) for efficient coding, version control tools for collaboration, data management offerings for quality input, cloud platforms for scalability and access to GPUs, and collaboration tools for team synergy. “Each is critical for streamlined, scalable AI development,” she says.
Every AI developer should have a minimum set of tools that cover various aspects of development, advises Utkarsh Contractor, vice president of AI at generative AI solutions firm Aisera and a generative AI senior research fellow at Stanford University. “These include an IDE such as VS Code or Jupyter Notebook, a version control system like GitHub, and open-source frameworks like PyTorch and TensorFlow for building models.” He believes that data manipulation and visualization tools, like Pandas, Matplotlib, and Apache Spark, are essential, along with monitoring tools such as Grafana. Contractor adds that access to compute resources and GPUs, either locally or in the cloud, is also critical for quality AI development.

GitHub Copilot, an AI-assisted programming tool, isn’t essential but can enhance productivity, Contractor says. “Similarly, MLflow excels in tracking experiments and sharing models, while tools like Labelbox simplify dataset labeling.” Both are valuable additions, but not required, he observes.

When it comes to cloud services, Berry notes that tools such as AWS SageMaker, Google Cloud AI Platform, Google Colab, Google Playground, and Azure Machine Learning offer fully managed environments for building, training, and deploying machine learning models. “These platforms provide a range of automated tools like AutoML, which can help developers quickly create and tune models without deep expertise in every aspect of machine learning,” he says. “They are particularly valuable for developers who want to focus more on model development and less on infrastructure management.” Berry adds that these tools add value by streamlining processes, enhancing collaboration, and improving the overall user experience, even if they aren’t strictly required for all AI projects.
When it comes to scaling AI development at the enterprise level, organizations should look beyond disparate development tools to broader platforms that support the rapid adoption of specific AI use cases from data through deployment, Ashoori advises. “These platforms can provide an intuitive and collaborative development experience, automation capabilities, and pre-built patterns that support developer frameworks and integrations with the broader IT stack.”

Fading Away

As AI evolves and new tools arrive, several older offerings are falling out of favor. “Some libraries, such as NLTK and CoreNLP for natural language processing, are losing relevance and becoming obsolete due to innovations like generative AI and transformer models,” Contractor says.

“Once the go-to for data analysis, Pandas and NumPy, two popular Python libraries for data analysis and scientific computing, are losing adherents,” observes Yaroslav Kologryvov, co-founder of AI-powered business automation platform PLATMA, via email. “Theano, replaced by TensorFlow and PyTorch, has suffered a similar fate.”

As AI development continues to advance rapidly, staying updated with the latest tools and frameworks is crucial for maintaining a competitive edge, Berry says. “While some older tools may still serve specific purposes, the shift toward more powerful, efficient solutions is clear,” he states. “Embracing innovations ensures that AI developers can tackle increasingly complex challenges with agility and precision.”

Adaptability and Streamlining

In the rapidly evolving AI universe, developers must maintain a high degree of adaptability, continuously reassessing and optimizing their toolsets, Contractor says. “As innovation accelerates, tools that are essential today may quickly become outdated, necessitating the adoption of new cutting-edge technologies and methodologies to enhance workflows and maximize project efficiency and effectiveness.”

To simplify and streamline the AI development experience, organizations should seek platforms that provide developers with optionality, customization, and configurability at every layer of the AI stack, Ashoori concludes.

The Essential Tools Every AI Developer Needs Read More »

Beyond the Election: The Long Cybersecurity Fight vs Bad Actors

The outcome of the US presidential election will not be the end of cyberthreats from bad actors who might be backed by aggressor nation-states. Geopolitical tensions will persist on the domestic and international stages, with the potential for enterprises to be targets. Denial-of-service attacks, ransomware, and other forms of digital malice stand to be in play for the sake of political agendas, though money can play as much of a role in hackers’ motivations as ideology. What types of organizations might find themselves to be targets (perhaps again) after the election?

This episode of DOS Won’t Hunt brings together Carl Wearn (upper left in video), head of threat intelligence analysis and future ops at Mimecast; Robert Johnston (lower right), co-founder and CEO of Adlumin; Mike Wiacek (lower center), CEO of Stairwell; Armaan Mahbod (lower left), vice president of security and business intelligence with DTEX Systems; and Adam Darrah (upper center), vice president of intelligence with ZeroFox. They discussed ways organizations might orient their cybersecurity and defenses for the post-election world, the prevalent types of attacks launched on behalf of aggressor states, and how the current cybersecurity infrastructure measures up to the potential threats in play.

Listen to the full podcast here.

Beyond the Election: The Long Cybersecurity Fight vs Bad Actors Read More »

Harnessing Mainframe Data for AI-Driven Enterprise Analytics

“Harnessing Mainframe Data for AI-Driven Enterprise Analytics”
Tuesday, November 12, 2024 at 1:00pm EDT

Did you know that 92% of IT leaders are actively investing in artificial intelligence (AI) to advance data and analytics initiatives, with an average of five projects either planned or ongoing? And despite the critical importance of mainframe data, only 28% of organizations report extensive use of such data for their analytical endeavors. These initiatives require data to fuel the AI and analytical models behind them, but where does mainframe data fit into the equation?

Join us for a discussion of Rocket Software and Foundry’s research findings from the study “The Role of AI and Mainframe Data in Enterprise Analytics.” This webinar is for data and analytics decision makers looking to learn how to better integrate AI and mainframe data, overcoming prevalent challenges to unlock the full potential of enterprise analytics.

Speakers:

Ray Sullivan, Vice President, Product Management, Rocket Software
Ray Sullivan is the Vice President of Product Management for Rocket Software’s Data Modernization business. She has spent her career in product management, product marketing, and product strategy in the enterprise software and consumer electronics industries. Ray drives Rocket’s strategy for the Structured Data portfolio, helping customers leverage and scale their data assets to deliver valuable business outcomes throughout their data modernization journeys. She also steers business and technical strategy to drive and expand technical partnerships.

Lauren Zacharias, Director, Solution & Customer Marketing, Rocket Software
Lauren Zacharias is the Director of Solution & Customer Marketing for Rocket Software’s Data Modernization business. She has spent her career in product marketing, customer marketing, content, and product development for B2B and B2C companies of different sizes in the technology, software, retail, and financial services industries. Her specialty is creating strategic messaging that helps businesses connect their product solutions with key buyers. Lauren has certifications and professional certificates from the Pragmatic Institute, Google, and the Product Marketing Alliance.

Moderator: Peter Krass, InformationWeek

Offered Free by: Rocket Software

Harnessing Mainframe Data for AI-Driven Enterprise Analytics Read More »

How to Keep IT Up and Running During a Disaster

The United States experienced 28 disasters, including storms, flooding, tornadoes, and a wildfire, that cost more than a billion dollars each in 2023, according to the National Oceanic and Atmospheric Administration (NOAA). And those were only the most expensive, weather-related events in one country. Around the world, natural disasters, including non-weather-related phenomena such as earthquakes and tsunamis, wreak havoc on human life and on infrastructure — including the IT that keeps life in the digital age running smoothly.

While the devastation caused by massive events understandably captures headlines, even relatively minor natural disasters such as large storms can affect IT operations. A 2024 report found that 52% of data center outages were the result of power failures. In the last decade, 83% of major power outages were weather-related. Even relatively minor storms can take out power lines.

Fourteen percent of respondents surveyed for InformationWeek’s 2024 Cyber Resilience Strategy Report said that their network accessibility had been disrupted by severe weather or a natural disaster. Sixteen percent ranked natural disasters as the single most significant event they had experienced.

Some businesses affected by natural disasters don’t survive at all: according to the Federal Emergency Management Agency, 43% of businesses never reopen and almost a third go out of business within two years. Loss of IT accessibility for nine days or more typically results in bankruptcy within one year.

Only 23% of respondents to a survey on the effects of Hurricane Sandy in 2012 were prepared for the storm. Despite the increasing prevalence of weather-related events because of climate change, the US Chamber of Commerce Foundation found that only 26% of small businesses have a disaster plan in place as of this year, suggesting that few have planned for how their IT will be impacted.
Here, InformationWeek investigates strategies for keeping IT operational when disaster inevitably strikes, with insights from data center operator DataBank’s senior director of sustainability, Jenny Gerson, and industrial software company IFS’s chief technology officer for North America, Kevin Miller.

Preventing Damage to Infrastructure

Depending on the location of an IT facility and the natural disasters common to the region, any number of steps may need to be taken to prevent damage to essential physical IT components.

“We take into account all kinds of natural disasters when we’re looking at where to site a data center — we try to site it in the safest place we can,” Gerson says.

In earthquake-prone regions, buildings need to be able to withstand temblors — additional reinforcements may be needed to prevent servers and wiring from being disrupted. Operators in areas prone to severe storms and hurricanes may need to both stormproof their buildings and ensure that essential equipment is located above ground level or in waterproof enclosures to avoid potential flood damage. Flood barriers may be advisable in some areas. Attention to potential mold damage after flooding may be necessary, as mold may create dangerous conditions for employees. And fire suppression systems may be able to mitigate damage before equipment is completely destroyed.

Using IoT sensing technology can provide early warning of disaster events and keep an eye on equipment if human access to facilities is cut off. Sensors and cameras can be helpful in determining when it may be appropriate to switch operations to other facilities or back up servers. Moisture sensors, for example, can detect whether floods may be on the verge of impacting device performance. But, Miller notes, IoT devices can sometimes fail.
“We’re seeing customers who are starting to rely more on options like Starlink,” he says. “There’s no physical infrastructure other than a mini satellite dish that’s providing that connectivity — but [it offers the] ability for them to get data, feed it back, analyze it, and then make predictive assessments on what they should be doing.”

Onsite generators, including sustainable onsite power plants using solar or wind, and microgrids can keep operations running even if access to the main grid is cut off. And redundancy in cooling is crucial for data centers as well.

“Should the utility go down, we have a seamless way to get to our generator backup so there are no blips in power,” Gerson claims. “We always have backup cooling systems.”

Creating Backups

Geodiversity can make or break IT operations during a natural disaster. While steps can be taken to protect operations, they may not always be sufficient to prevent interruption. If a data center or other IT operation is taken offline, switching over to a location in an unaffected area, or to more dispersed, cloud-based operations, can be relatively seamless if proper planning is in place.

This type of redundancy requires careful implementation of regular backups — cloud technology makes this relatively efficient, but hard backups may be useful as well. Setting shorter recovery point objectives, while potentially more expensive in the short term, will likely make it easier to get things back up and running if an operation is taken offline by a disaster.

IoT devices may be helpful in recovering data that is not fully backed up. Many of these devices store data on their own before transmitting portions of it to the servers to which they are connected. In the case of a disaster, that stored information may be helpful in data restoration processes.
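The relationship between backup frequency and a recovery point objective (RPO) can be made concrete with a small sketch: in the worst case, a disaster strikes just before the next scheduled backup, so the data loss window equals the backup interval. The values below are illustrative, not recommendations:

```python
# Minimal sketch: a recovery point objective (RPO) caps acceptable data loss,
# so the backup interval must not exceed it. Intervals here are illustrative.
from datetime import timedelta

def meets_rpo(backup_interval: timedelta, rpo: timedelta) -> bool:
    # Worst case, a disaster hits just before the next backup runs,
    # losing up to one full interval of data.
    return backup_interval <= rpo

rpo = timedelta(hours=1)
print(meets_rpo(timedelta(minutes=15), rpo))  # 15-minute backups satisfy a 1-hour RPO
print(meets_rpo(timedelta(hours=6), rpo))     # 6-hour backups do not
```

This is why shortening the RPO costs more in the short term: it forces more frequent backups and the storage and bandwidth that go with them.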
Regulatory Compliance  In disaster-prone regions, it is advisable to proactively facilitate relationships with government authorities and emergency response agencies. This can be helpful both in ensuring continued compliance and assistance in the event of a natural disaster.   “There are certain aspects of [disaster response] that need to be captured,” Miller says. “A lot of times in crisis mode, that becomes a secondary focus. But [disaster management] systems allow the tracking and the recording of that information.”  Being aware of deadlines for compliance reporting and being in contact with

How to Keep IT Up and Running During a Disaster Read More »