Breaking Down Human-Element Breaches To Improve Cybersecurity: FAQ

We are thrilled to announce our research, Deconstructing Human-Element Breaches (Forrester clients can access here), detailing the many and varied risks posed by and to humans — problems that have plagued cybersecurity teams for decades. Forrester clients can use this research as a catalyst for productive conversations with executives and peers across functions about controls to mitigate the human-element breach types most common to their organizations and industries. This blog includes an FAQ based on the most common questions we receive from our clients and the security vendor community about human-element or human-related breaches.

Aren’t human-element breaches just social engineering and human error?

Whenever we mention human-related breaches, security and risk leaders and practitioners typically think of two main categories: social engineering and human error. This isn’t wrong, but it isn’t the full picture. After covering these topics separately for years, we decided to deconstruct the problem of human-element breaches to uncover what they are and how to address them. The result spans a variety of categories, including security culture, social engineering (including phishing), and insider risk.

How do I use Forrester’s wheel of human-element breaches?

As part of the research, we deconstructed eight breach families containing 25 human-element breach types (see figure below). They include established and emerging attacks such as social engineering, data exfiltration by insiders, and plain human error. Attackers target humans in many different ways, and humans behave in ways that leave them and their organizations vulnerable to attack. Security leaders can use this wheel to assess the breach types that pose the most risk to their organization, define and describe each breach to stakeholders, and gain buy-in for investment to mitigate these risks.

Why do we need this clarity?
While it’s great that human-centered security is becoming more top of mind, human-related breaches remain inconsistently defined. For example, well-respected sources, such as the annual Verizon Data Breach Investigations Report, the European Union Agency for Cybersecurity, and the Office of the Australian Information Commissioner’s notifiable data breach reports, each provide different perspectives of what constitutes human-related breaches. This confusion can lead organizations to focus on common breaches while ignoring others, limit the solutions to well-trodden yet ineffective recommendations such as security awareness and training (SA&T), or worse, bury their heads in the sand, overfocusing on technology and not people.

Can’t you just train people? After all, this is “just” a human issue.

According to Forrester data, 97% of organizations conduct some form of SA&T — hoping for a silver bullet while checking a regulatory compliance box. Despite this, human-related attacks such as business email compromise have quadrupled, CISOs haven’t instilled security cultures in their organizations, training continues to cause friction for learners, and no one knows what behaviors actually change. While awareness of security issues is important, it can never replace the role of technical controls. Even the most vigilant employee will fall for a credible phishing lure or deepfake voice call, accidentally misconfigure an API setting, or send a sensitive file to the wrong recipient. Training is not enough. Technical controls must be in place to protect users from these attacks and change their behavior.

If training isn’t as effective as you say it is, can’t we just use tech?

While some breaches, such as those caused by human error or social engineering, are easy to associate with people, others that are technologically heavy, such as generative AI (genAI) misuse, are a bit more difficult to understand.
Yet it was people relying on fallible genAI content that led the Australian Federal Parliament to publish an inaccurate submission. Without understanding that this is a human-related issue, it is easy to rely solely on technology to solve the problem. Security leaders need to strike a balance between training and technical controls. We provide guidance on how to do so using Forrester’s Human-Element Breach Control Matrix.

I keep hearing about human risk management, but isn’t it just SA&T 2.0?

Far from being SA&T with a fancy new name, human risk management (HRM) solutions represent a significant change of mindset, strategy, process, and technology. Forrester defined HRM and began evaluating HRM vendors, encouraging organizations to positively influence security behaviors through evidence-based detection and anticipation of human risk instead of relying purely on training.

Do we really need another tool to manage human risk?

While some technologies in your tech stack provide limited behavioral insights, HRM is unique in that its sole focus is human risk. It integrates with existing tools and technology to measure a vast range of security behaviors and provides a comprehensive view of human risk. HRM also correlates behavioral, threat, access, and knowledge data to surface previously unseen risks. It interacts with people through a set of interventions, including training but also policy updates, to protect people in a way that requires minimal effort on their part.

Talk To Us

Forrester clients can schedule a guidance session or inquiry with:

Jinan Budge, for human-centered security, security culture, influence and engagement, and human risk management.
Jess Burn, for social engineering and email, messaging, and collaboration security solutions.
Joseph Blankenship, for insider risk.
Heidi Shey, for data security.
Any one of the contributors to this research to discuss the entirety of human-related breaches.
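No HRM vendor publishes its scoring formula, but the kind of correlation described above — combining behavioral, threat, access, and knowledge data into one view of a user's risk — can be illustrated with a toy weighted score. The weights and signal names below are invented purely for the example:

```python
# Toy illustration only: real HRM products use evidence-based models,
# not hard-coded weights. This just shows the general idea of fusing
# behavioral, threat, access, and knowledge signals into one score.

WEIGHTS = {
    "behavior":  0.35,  # e.g. clicked a simulated phish, policy violations
    "threat":    0.30,  # e.g. how often attackers target this user
    "access":    0.25,  # e.g. privileged accounts, sensitive-data access
    "knowledge": 0.10,  # e.g. inverted training score (low score = risk)
}

def user_risk(signals: dict[str, float]) -> float:
    """Each signal is normalized to 0..1; missing signals count as 0."""
    return sum(WEIGHTS[k] * signals.get(k, 0.0) for k in WEIGHTS)

# A hypothetical high-risk user: risky behavior plus broad access.
alice = {"behavior": 0.8, "threat": 0.6, "access": 0.9, "knowledge": 0.2}
```

A score like `user_risk(alice)` could then drive the targeted interventions the research describes — nudges, training, or policy changes — rather than blanket SA&T.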


Air traffic control for drones in sight for Norwegian startup AirDodge

Remember when spotting a drone in the sky was a novelty? Now it’s like playing whack-a-mole with flying machines. Delivery drones, military drones, AI drones, hobby drones — our skies are busier than the queue at airport security. Without air traffic control, we’re one step away from midair collisions and drones arguing over parking spots.

Enter AirDodge, a Norwegian startup that’s stepping in to tame the chaos. The Oslo-based company just secured a $500,000 pre-seed funding round, led by VC firms Nordic Makers and Antler. The investment will help AirDodge develop its U-Space software platform, designed to manage large-scale drone operations across Europe.

“At AirDodge, we envision a future where drones seamlessly integrate into the airspace, contributing positively to various industries while ensuring safety and compliance,” said Umar Chughtai, who founded AirDodge in 2022. “This funding will allow us to accelerate the development of our U-Space platform, bringing us closer to realising that vision.”

The AirDodge platform provides a real-time map of drone activity and aims to simplify the process of obtaining flight permissions. The tech aligns with the EU’s U-Space standards, which are “designed to provide safe, efficient and secure access to airspace for large numbers of unmanned aircraft, operating automatically and beyond visual line of sight.”

In 2018, London’s Gatwick airport was forced to shut down after drones were spotted flying near the runway. The incident affected around 1,000 flights and 140,000 passengers. Many similar incidents have occurred over the years, from Stockholm to Frankfurt.

If AirDodge’s tech had been around during the Gatwick fiasco, it could’ve spotted the rogue drones in real time, flagged them faster than airport security can confiscate a water bottle, and perhaps kept flights running smoothly.
By enforcing no-fly zones and syncing drones with air traffic control, the platform might have saved 140,000 passengers a lot of headaches (and missed connections).

“Drone technology has the potential to have a positive impact on society, business and public services, but there is not yet a way to guarantee safety,” said Kristian Jul Røsjø, partner at Antler. “High-profile disruptions are hindering the development of this technology and AirDodge will provide a much-needed solution.”

AirDodge will use the pre-seed funding to accelerate the development of its platform. The company aims to launch the alpha version in mid-2025.

Across the EU, the market for drone services is soaring. One projection values it at €14.5bn by 2030, creating 145,000 new jobs. But as drones proliferate, so do the challenges.

“We have the drones, but we lack the infrastructure,” said Nima Tisdall, partner at Nordic Makers. “In this case, the infrastructure is not roads, plumbing, or electrical wires, but rather ethereal communication systems.” Tisdall added that AirDodge had unusual strengths for the region. “The founding team is forceful and ambitious — qualities that can be surprisingly rare to find in Nordic entrepreneurs, but integral in building a category-winning business,” she said. “We’re excited to be supporting a local player who can help unlock the large-scale adoption of drones across Europe.”
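AirDodge has not published its algorithms, but the no-fly-zone enforcement described above ultimately reduces to geometry: is a drone inside a restricted volume? Purely as an illustration, here is a toy circular geofence check using the haversine distance; the zone coordinates and radius are hypothetical:

```python
from math import asin, cos, radians, sin, sqrt

# Illustration only: AirDodge's actual platform is not public. A toy
# circular no-fly-zone check of the kind a drone traffic management
# (UTM / U-Space) system must run for every tracked aircraft.

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def in_no_fly_zone(drone_pos, zone):
    """drone_pos = (lat, lon); zone = (lat, lon, radius_km)."""
    zone_lat, zone_lon, radius_km = zone
    return haversine_km(drone_pos[0], drone_pos[1], zone_lat, zone_lon) <= radius_km

# Hypothetical 5 km zone roughly centred on Gatwick airport.
GATWICK_ZONE = (51.1537, -0.1821, 5.0)
```

A real U-Space platform would of course use altitude, polygonal and time-limited zones, and live telemetry, but the core test per position update looks like this.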


Former Google, Meta leaders launch Palona AI, bringing personalized, emotive customer agents to non-techie enterprises

Speaking for myself, interacting with any merchant’s AI-powered chatbot on their website is often an exercise in frustration. Phone trees with robot voices are typically worse. I’d wager I’m hardly alone in my assessment. Who amongst us hasn’t experienced long hold times, slow responses, outdated information, no awareness of the customer’s own account history, grating faux politeness and a host of other inefficiencies?

A new startup called Palona debuted last week that aims to fix this sorry state of affairs. It equips direct-to-consumer enterprises — think pizza shops and electronics vendors — with live, 24/7 customer support sales agents that are uniquely reflective of each business’s brand personality, voice, inventory stock and value proposition. The electronics vendor has a “wizard” agent made by Palona, while the pizza shop gets a surfer dude agent personality — naturally.

In all cases, Palona focuses on creating AI agents that have high “EQ,” or “emotional intelligence/emotional quotient,” building them from a combination of open source and proprietary AI models and training some of their own using human sociology research. “A kind of fundamental thesis is that we can create an experience that is delightful and feels genuine, like a real human conversation,” Palona co-founder and CTO Tim Howes said in an in-person interview with VentureBeat. “ChatGPT is a hugely useful tool, but it does not feel like a human conversation.”

Palona claims its system can be easily implemented by a non-techie brand on their website, mobile app or phone lines — with responses uniquely tailored to each brand and each communications environment. And, in fact, its agents are already at work handling orders, answering questions and complaints and suggesting products and upsells to customers.
Strong founding background

In addition to Howes, Palona was co-founded and is led by a team of engineers from some of the top tech companies in the world, among them:

Maria Zhang, Palona’s CEO, is a former VP of engineering at Google, VP/GM of AI for products at Meta and CTO of Tinder. She also founded Alike, which was acquired by Yahoo in 2013.

Palona’s chief scientist Steve Liu was formerly chief scientist at Samsung AI Center and Tinder. A tenured professor at McGill University, Liu is also a Fellow of IEEE and the Canadian Academy of Engineering, with more than 390 research papers to his name.

And Howes himself is the co-inventor of the industry-standard, open source Lightweight Directory Access Protocol (LDAP) online data storage system, as well as co-founder of LoudCloud and OpsWare (the latter was acquired by HP for $1.65 billion). He was also previously the CTO at Netscape and at HP Software, and led developer productivity at Meta’s AI infrastructure business.

“We’re building fully autonomous sales agents — not tools for salespeople, but actual AI salespeople,” said Zhang, adding that AI will be “the employee of the century.”

24/7 polite, distinct, personable sales agents

Palona AI positions itself as a solution for companies looking to improve their sales performance, customer engagement and brand loyalty. The Palona agents act as customized virtual sales employees, combining soft sales skills with 24/7 availability, unlimited capacity and advanced memory recall, and can interact with customers through an online chatbot, SMS/text or AI-powered voices.

“100% — we support voice,” Zhang explained. “For example, in pizza ordering, voice is still a major user pattern. In the Midwest, about 50% of people still call to order. On the east and west coasts, it’s around 20%, but it’s still significant.”

Palona’s voices are licensed, but the company has the ability to train and deploy custom ones — even voice clones of authorized customer reps or a CEO.
The company realized through testing that the voice version of Palona’s AI sales agents would need to have distinctly different interaction styles from the text chatbot. “We tested different voice interactions, and for pizza ordering, for example, customers wanted efficiency,” Zhang related. “They didn’t want a chatty AI — they just wanted to get their order done as fast as possible. So we optimized for that, making it have less personality, less verbosity, more efficiency.”

Unlike traditional chatbots that serve as assistants to human representatives, Palona AI is designed to handle entire sales cycles without human intervention. “There’s a big gap between lifelike AI models like ChatGPT and what businesses actually need — an AI agent that can fully sell, convert, and upsell,” Zhang explained. Palona claims to minimize errors and reduce AI hallucinations by up to 98%, ensuring reliable interactions.

Zhang and Howes said that for even the most analog businesses, it takes just a short lead time to get going, and only several days for a simple implementation. Customers provide Palona with “FAQs, employee training manuals, policies and procedures,” said Howes. Then, they define what actions the agent should take — be it processing orders, answering inquiries or handling support issues. One of the biggest factors affecting setup time: how much integration is required with the customer’s existing systems (point of sale, customer relationship management, ordering platforms). “If we already support their system, it’s plug-and-play,” Howes explained. “The agent can be ready in a couple of days. If they’re using a new, unfamiliar system, that requires additional engineering work, which could take longer.” In addition, Zhang said that Palona was “actually in the process of automating agent setup.
Eventually, businesses will be able to use a Palona agent to configure their own Palona agent.”

Three language models are better than one

Palona achieves all this by combining three different models. The first is a custom, fine-tuned large language model (LLM) that serves as the basis for every distinct business sales agent — the pizza shop gets a different tone and personality from the electronics vendor, and each one is customized out of the box. There’s also a supervisory model that detects, catches and removes hallucinations from the main model before it outputs them.
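Palona's internals are proprietary, but the supervisory-model arrangement it describes is a general pattern: a primary model drafts a reply, a second model vets it against the business's own documents, and a safe fallback fires if no draft passes. A heavily simplified sketch, with stub functions standing in for the real models (every name and check below is invented for illustration):

```python
# Sketch of a supervisor-model pipeline. Palona's actual architecture is
# not public; these stubs only show the shape of the pattern.

def primary_model(prompt: str) -> str:
    # Stand-in for the fine-tuned sales-agent LLM.
    return f"draft answer to: {prompt}"

def supervisor_ok(draft: str, source_docs: list[str]) -> bool:
    # Stand-in for the supervisory model. Here, a naive check: any
    # all-caps token (imagine SKUs or plan names) must appear verbatim
    # in the business's documents, or the draft is rejected.
    corpus = " ".join(source_docs)
    claimed = [token for token in draft.split() if token.isupper()]
    return all(token in corpus for token in claimed)

def answer(prompt: str, source_docs: list[str], retries: int = 2) -> str:
    for _ in range(retries + 1):
        draft = primary_model(prompt)
        if supervisor_ok(draft, source_docs):
            return draft
    # Escalate rather than risk a hallucinated reply.
    return "Let me connect you with a human representative."
```

The design choice worth noting is that the verifier sits between the generator and the user, so a hallucination is caught before it is ever shown, rather than corrected after the fact.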


Beyond benchmarks: How DeepSeek-R1 and o1 perform on real-world tasks

DeepSeek-R1 has surely created a lot of excitement and concern, especially for OpenAI’s rival model o1. So, we put them to the test in a side-by-side comparison on a few simple data analysis and market research tasks.

To put the models on equal footing, we used Perplexity Pro Search, which now supports both o1 and R1. Our goal was to look beyond benchmarks and see if the models can actually perform ad hoc tasks that require gathering information from the web, picking out the right pieces of data and performing simple tasks that would otherwise require substantial manual effort.

Both models are impressive but make errors when the prompts lack specificity. o1 is slightly better at reasoning tasks, but R1’s transparency gives it an edge in cases (and there will be quite a few) where it makes mistakes. Here is a breakdown of a few of our experiments and the links to the Perplexity pages where you can review the results yourself.

Calculating returns on investments from the web

Our first test gauged whether models could calculate returns on investment (ROI). We considered a scenario where the user has invested $140 in the Magnificent Seven (Alphabet, Amazon, Apple, Meta, Microsoft, Nvidia, Tesla) on the first day of every month from January to December 2024. We asked the model to calculate the value of the portfolio at the current date. To accomplish this task, the model would have to pull Mag 7 price information for the first day of each month, split the monthly investment evenly across the stocks ($20 per stock), sum them up and calculate the portfolio value according to the value of the stocks on the current date.

In this task, both models failed. o1 returned a list of stock prices for January 2024 and January 2025 along with a formula to calculate the portfolio value.
However, it failed to calculate the correct values and basically said that there would be no ROI. On the other hand, R1 made the mistake of only investing in January 2024 and calculating the returns for January 2025.

o1’s reasoning trace does not provide enough information

However, what was interesting was the models’ reasoning process. While o1 did not provide much detail on how it had reached its results, R1’s reasoning trace showed that it did not have the correct information because Perplexity’s retrieval engine had failed to obtain the monthly data for stock prices (many retrieval-augmented generation applications fail not because of the model’s lack of abilities but because of bad retrieval). This proved to be an important bit of feedback that led us to the next experiment.

The R1 reasoning trace reveals that it is missing information

Reasoning over file content

We decided to run the same experiment as before, but instead of prompting the model to retrieve the information from the web, we decided to provide it in a text file. For this, we copy-pasted monthly data for each stock from Yahoo! Finance into a text file and gave it to the model. The file contained the name of each stock plus the HTML table that contained the price for the first day of each month from January to December 2024 and the last recorded price. The data was not cleaned, to reduce the manual effort and to test whether the model could pick the right parts from the data.

Again, both models failed to provide the right answer. o1 seemed to have extracted the data from the file, but suggested the calculation be done manually in a tool like Excel. The reasoning trace was very vague and did not contain any useful information to troubleshoot the model. R1 also failed and didn’t provide an answer, but the reasoning trace contained a lot of useful information. For example, it was clear that the model had correctly parsed the HTML data for each stock and was able to extract the correct information.
It had also been able to do the month-by-month calculation of investments, sum them and calculate the final value according to the latest stock price in the table. However, that final value remained in its reasoning chain and failed to make it into the final answer. The model had also been confounded by a row in the Nvidia chart that marked the company’s 10:1 stock split on June 10, 2024, and ended up miscalculating the final value of the portfolio.

R1 hid the results in its reasoning trace along with information about where it went wrong

Again, the real differentiator was not the result itself, but the ability to investigate how the model arrived at its response. In this case, R1 provided us with a better experience, allowing us to understand the model’s limitations and how we can reformulate our prompt and format our data to get better results in the future.

Comparing data over the web

Another experiment we carried out required the model to compare the stats of four leading NBA centers and determine which one had the best improvement in field goal percentage (FG%) from the 2022/2023 to the 2023/2024 seasons. This task required the model to do multi-step reasoning over different data points. The catch in the prompt was that it included Victor Wembanyama, who just entered the league as a rookie in 2023. The retrieval for this prompt was much easier, since player stats are widely reported on the web and are usually included in their Wikipedia and NBA profiles. Both models answered correctly (it’s Giannis, in case you were curious), although depending on the sources they used, their figures were a bit different. However, they did not realize that Wemby did not qualify for the comparison and gathered other stats from his time in the European league. In its answer, R1 provided a better breakdown of the results with a comparison table along with links to the sources it used for its answer.
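As an aside, the dollar-cost-averaging arithmetic both models struggled with in the ROI experiments above is straightforward to express in code. A minimal sketch, using invented placeholder prices rather than real Magnificent Seven data:

```python
# Toy version of the ROI task: a fixed dollar amount invested in each
# stock on the first of every month, valued at current prices. All
# prices here are invented placeholders, not real market data.

monthly_prices = {                      # hypothetical first-of-month prices
    "STOCK_A": [100, 105, 110, 108],    # Jan..Apr
    "STOCK_B": [50, 48, 52, 55],
}
current_price = {"STOCK_A": 120, "STOCK_B": 60}
PER_STOCK = 20.0                        # $20 per stock per month

def portfolio_value(prices_by_stock, current, per_stock):
    total = 0.0
    for ticker, prices in prices_by_stock.items():
        # A fixed dollar buy each month yields per_stock / price shares.
        shares = sum(per_stock / p for p in prices)
        total += shares * current[ticker]
    return total

value = portfolio_value(monthly_prices, current_price, PER_STOCK)
invested = PER_STOCK * sum(len(p) for p in monthly_prices.values())
roi = (value - invested) / invested
```

With the real monthly prices retrieved correctly (the step that tripped up Perplexity's retrieval), this is the whole calculation, which is why the failures say more about retrieval and answer assembly than about the models' arithmetic.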


Tidal Wave of Trump Policy Changes Comes for the Tech Space

Within his first week in office, President Trump signed a flood of wide-ranging executive orders and took actions that have significant implications for the technology industry. The sheer volume of change, along with the freezing and unfreezing of federal funding, sparked much confusion.

What are some of the biggest tech policy changes coming from the current administration, and what could they mean for the industry?

A New AI Order

Trump voiced plans to repeal Biden’s executive order on AI, arguing that it stifles innovation, and swiftly followed through. He signed an executive order — Removing Barriers to American Leadership in Artificial Intelligence — and also announced plans for Stargate, a $500 billion AI infrastructure initiative.

“The Stargate initiative is interesting as it aligns several large players in the space into a single entity to help push initiatives,” Max Shier, vice president and CISO at Optiv, a cybersecurity advisory services company, tells InformationWeek via email.

While those moves have big possibilities for the AI space, it will likely take time to see the effects.

“Most of the executives that I’m talking to now don’t feel like there’s a huge impact, at least right now. And they’re continuing to make investments and pursue solutions as they were … in Q4 or second half of last year, and in fact investing even more in those solutions from an AI perspective,” Bill Farmer, lead of the aerospace, defense, and government services investment banking team and managing director at Brown Gibbons Lang & Company, an investment bank and financial advisory firm, tells InformationWeek.

With Chinese startup DeepSeek making strides, competition in the global race for AI market dominance is heating up. But there are still concerns over risk in the AI space.
Several industry and consumer groups signed a letter calling for the White House to retain AI testing and transparency rules, CNBC reports.

“The removal of guardrails and oversight can be negative if tech companies are allowed to do whatever they want without ethical considerations guiding their conscience,” says Shier.

Cybersecurity Changes

The Trump administration fired the Cybersecurity and Infrastructure Security Agency’s (CISA) Cyber Safety Review Board (CSRB). The CSRB was investigating Salt Typhoon, the China-state-backed APT group responsible for a massive breach of US telecom companies.

“I could also see potentially a pullback in CISA’s authority and role. Their budget has been increasing year over year. At a minimum, I think they’re going to take a hard look at what those programs are and what that spending looks like,” says Deniece Peterson, senior director of federal market analysis at Deltek, an enterprise software and information solutions company.

During his campaign, Trump was vocal about his intentions to be tough on China, but he seems to be taking a more nuanced approach now that he has taken office, AP News reports. What that means for the federal government’s approach to cyber threats from China remains unclear.

Peterson points out that Trump has established the President’s Council of Advisors on Science and Technology (PCAST). “That may incorporate some of those [cybersecurity] activities. We just don’t know yet,” she says.

DOGE and Government Spending

The Department of Government Efficiency (DOGE) is going to focus on “modernizing federal technology and software to maximize efficiency and productivity,” according to the executive order establishing the new department.
“Particularly the government services folks are extremely nervous about DOGE … [it is] looking at reducing government spending, looking at reducing services, looking at reducing contractors,” Farmer points out.

Trump has also spoken about rescoping and even eliminating entire federal departments. How that will actually play out under this administration remains to be seen, but it could result in workforce reductions.

“The Trump administration is going to be looking at automation of certain functions,” says Peterson.

Workforce reductions and increased automation could mean opportunities for IT companies to vie for government contracts. “IT contractors are looking at … how they can support this new kind of environment and shift,” Peterson adds.

A Step Back from Regulation

Trump signed an executive order placing a regulatory freeze on federal agencies. This administration has made clear its plans for deregulation.

“His moves are not unexpected. He was very clear on what he wanted to achieve in the tech space and that is less restrictions on tech companies and more innovation,” says Shier.

One potential result of a lighter-touch approach to regulation could be more M&A activity in the tech industry.

“I think a lot of deals were shelved in the last three or four years that had the potential to be significant transactions but because of the regulatory risk, folks decided not to pursue those,” says Farmer. “That’s changed now. At least optically, people feel like there’s a higher chance that deals could get through.”

Early Days

It is still early days for the second Trump administration. Many of his executive orders are facing legal pushback, and the impact of the president’s actions is not readily apparent in many cases.

“It’s hard to figure out how the executive orders to date impact spending because there’s been a lot of confusion.
There’s … a lack of clarity on what the scope is, what kind of spending these things apply to,” says Peterson.

Technology industry stakeholders will have to watch how these initial policy changes play out and prepare for the possibility of more.

“I will be watching whether they continue down the path of deregulation and how it affects the use and consumption of tech, including AI and privacy,” says Shier.

In the cybersecurity space, the fate of the Cybersecurity Maturity Model Certification (CMMC) program is also something to watch. “CMMC has seen a significant amount of pushback by companies doing business with the government as they state it is too expensive to implement and would reduce the competition in the government contractor space,” says Shier.

Technology leaders will be keeping a close eye on federal tech policy changes.


January's IPO Market Was Active Despite Tepid Debuts

By Tom Zanki (January 31, 2025, 9:06 PM EST) — Capital markets lawyers kept busy in January thanks to a sizable increase in initial public offerings, but the largest IPOs performed weaker than expected, likely sobering market participants’ expectations going forward….


Top 8 Penetration Testing Tools to Enhance Your Security

As technology advances, ensuring the security of computer systems, networks, and applications becomes increasingly critical. One of the ways in which security professionals can assess the security posture of an entire digital ecosystem is by carrying out penetration testing, or pen testing for short. Penetration testing is the authorized simulation of a real-world cyberattack. This allows organizations to evaluate how strong their security systems are and identify what weaknesses or vulnerabilities are present, if any.

According to research by SNS Insider, the penetration testing market is expected to reach $6.98 billion in value by 2032, largely due to the continued advancement of cybersecurity threats. As a fundamental practice for assessing an organization’s security posture, pentests involve both the expertise of experienced security professionals and the use of powerful penetration testing tools. Given the proliferation of these tools, I have come up with a list of the top penetration testing tools available with their features, benefits, and drawbacks.

Penetration testing software comparison table

Here is a feature comparison of our shortlisted pen testing tools and how they stack up against each other.
Tool | Compliance checks | Tests covered | Open-source / web-based | Reporting and documentation | Starting price
Astra | Yes | 8,000+ | Web | Yes | $1,999 per year, one target
Acunetix | No | 7,000+ | Web | Yes | Contact for quote
Intruder | Yes | Not specified | Web | Yes | $157 per month, one application
Metasploit | Yes | 1,500+ | Both | No | Contact for quote
Core Impact | Yes | Not specified | Web | Yes | $9,450 per user per year
Kali Linux | Yes | Not specified | Open-source | Yes | Completely free
Wireshark | No | Not specified | Open-source | Yes | Completely free
sqlmap | No | Not specified | Open-source | Yes | Completely free

Astra: Best for diverse infrastructure

Image: Astra

Astra is a penetration testing solution that combines manual and automated testing for applications, networks, APIs, and blockchain. With over 8,000 tests supported, this tool can help security professionals investigate vulnerabilities within a system. Astra covers different types of penetration testing, including web app pentests, cloud security pentests, and mobile app pentests.

As a comprehensive penetration testing solution, Astra covers many tests that can help organizations meet compliance standards, including SOC 2, GDPR, and ISO 27001. The Astra tool also integrates with GitLab, Jira, and Slack and infuses security into a continuous integration/continuous deployment (CI/CD) pipeline.

Why I picked Astra

I picked Astra for its Enterprise Web App subscription, which accommodates different infrastructures. In particular, it can be used on web, mobile, cloud, and network infrastructures, offering multiple targets across various asset types. This is on top of Astra’s 8,000+ available tests and its wide range of integrations with other popular software.

Pricing

Astra’s pricing is categorized into web app, mobile app, and AWS cloud security, each with different pricing.
- Web app: Scanner – $1,999/year; Pentest – $5,999/year; Enterprise – $9,999/year.
- Mobile: Pentest – $2,499/year; Enterprise – $3,999/year.
- AWS cloud security: Basic and Elite plans, both of which require a quote from the sales team.

Features

- Covers 8,000+ tests.
- Covers all tests required for ISO 27001, HIPAA, SOC 2, and GDPR.
- Integrates with GitLab, GitHub, Slack, and Jira.
- Scanning support for PWA/SPA apps.
- Support through Slack and Microsoft Teams.

(Image: Astra's pentest dashboard)

Integrations

- Slack workspaces
- Jira
- GitHub
- GitLab
- Azure
- CircleCI

Astra pros and cons

Pros:
- Publicly verifiable pentest certificates that can be shared with users.
- One of the widest testing coverages (over 8,000 tests).
- Tests are automated with AI/ML.
- Support via Slack or Microsoft Teams.

Cons:
- What is billed as a free trial is charged at $1 per day.
- Support via Slack and MS Teams is only available on the Enterprise plan.

Acunetix: Best for pentest automation

Acunetix by Invicti is a powerful pen-testing tool for web applications. The solution is packed with scanning utilities that help penetration testing teams quickly gain insight into over 7,000 web application vulnerabilities and generate a detailed report covering the scope of each vulnerability. Notable vulnerabilities Acunetix can detect include XSS, SQL injection, exposed databases, out-of-band vulnerabilities, and misconfigurations. Acunetix comes with a dashboard that sorts vulnerabilities into severity classes: critical, high, medium, and low. The tool is written in C++ and can run on Microsoft Windows, Linux, macOS, and the cloud.

Why I picked Acunetix

For businesses specifically looking for automated pentesting, I like Acunetix. It offers scheduled or recurring application scans, includes over 7,000 vulnerability tests, and generates useful insights before a scan is even halfway through.
I imagine it to be a great solution for organizations that want a no-nonsense pentest tool that saves them time without sacrificing overall security.

Pricing

Contact Acunetix for a quote.

Features

- Vulnerability categorization by order of severity.
- Support for over 7,000 web app vulnerabilities.
- Coverage of the OWASP Top 10 standard for developers and web application security.
- Scan scheduling functionality.
- Compatibility with issue-tracking tools like Jira and GitLab.

(Image: Acunetix scan result classification dashboard)

Integrations

- Jira
- Azure DevOps
- GitHub
- GitLab
- Bugzilla
- Mantis

Acunetix pros and cons

Pros:
- Detected vulnerabilities are classified according to their severity level.
- Supports reporting and documentation.
- Over 7,000 vulnerability tests make for broad coverage.
- Users can schedule one-time or recurring scans.
- Supports concurrent scanning of multiple environments.

Cons:
- No pricing details for users.
- Absence of a free trial.

Intruder

Features

- Cloud vulnerability scanning.
- Web vulnerability scanning.
- API vulnerability scanning.
- Compliance and reporting features.
- Internal and external vulnerability scanning.

(Image: Intruder main dashboard)

Integrations

- Amazon Web Services (AWS).
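The commercial scanners above automate thousands of checks, but the first step of nearly any network pentest is the same: probing which TCP ports on an authorized target accept connections. Here is a minimal, toy sketch of that reconnaissance step in Python (illustrative only, not part of any tool listed here; only run it against hosts you are authorized to test):

```python
# Minimal TCP connect scan: the basic reconnaissance step underlying
# network pentests. Toy illustration only; run it solely against hosts
# you are authorized to test.
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` on `host` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

For example, `scan_ports("127.0.0.1", range(1, 1025))` lists the open well-known ports on the local machine. Real scanners like those reviewed here layer service fingerprinting and vulnerability checks on top of this basic probe.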

Top 8 Penetration Testing Tools to Enhance Your Security

How Must Staffing Change in Relation to AI?

Debate continues over how artificial intelligence might upend current jobs and future careers, and nuances keep emerging in those discussions. The assumption that AI means immediate job cuts to deliver efficiency might not hold, especially as more divisions within organizations, and their leadership, start to understand how they can leverage this technology. Certain jobs might be eliminated, yet other jobs could evolve with AI. This episode of DOS Won't Hunt features Luke Behnke, vice president of product for Grammarly; Cliff Jurkiewicz, vice president of global strategy for Phenom; Ryan Bergstrom, chief product and technology officer for Paycor; Daniel Avancini, co-founder and chief data officer for Indicium; and Arun Varadarajan, co-founder and chief commercial officer for Ascendion. They discussed how AI is already changing staffing, what skillsets organizations want in an AI-powered world, fears about job loss, what this may mean for executives in the C-suite who need to get up to speed on AI, and when organizations can comfortably rely on AI to enhance their workforce. Listen to the full podcast here.


Latham Guides $50M Bitcoin Mining Data Center Investment

By Isaac Monterose (January 31, 2025, 7:18 PM EST) — Cipher Mining Inc., a data center company that focuses on bitcoin mining, announced a $50 million investment from SoftBank Corp. for the development of high-performance computing data centers in a deal guided by Latham & Watkins LLP.


Speech-to-Speech AI: Empowering a More Connected World

From automating complex tasks to providing deep insights through data analysis, artificial intelligence has reshaped the way businesses operate and compete in a global marketplace. Yet we are still in the early stages, with new AI advancements emerging regularly, each promising to push the boundaries of what's possible.

One of the most recent advancements is speech-to-speech AI technology, which is set to facilitate and enhance communication on an unprecedented scale. By enabling real-time voice translation and voice-based interactions with AI agents, speech-to-speech AI is poised to break down language barriers, streamline operations, and foster a more connected global economy.

The Architecture of Speech AI and Advancements

The term "speech-to-speech" might suggest a direct conversion of spoken language, but the reality is a more complex, multi-layered process. Today's speech AI systems operate through a sophisticated three-step workflow:

Speech-to-Text (STT): The process begins by capturing voice input, which is then transformed into mel-spectrograms, a visual representation of the sound's frequency content over time. Advanced neural networks, such as those used in models like OpenAI's Whisper, apply deep learning techniques to these spectrograms, enabling automatic speech recognition (ASR). The neural network analyzes the spectrograms to convert the audio signal into text. This deep learning approach allows the system to transcribe speech with high precision, providing the foundation for subsequent processing tasks.

Text-to-Text (TTT): Once the speech is converted into text, it is processed by powerful natural language models like GPT-4. This stage involves understanding the context, translating languages if needed, and generating appropriate responses. It is the cognitive core of the system, where raw input text is turned into a meaningful output.
Text-to-Speech (TTS): Finally, the processed text is converted back into spoken words. This involves generating new mel-spectrograms that represent the speech, which are then converted into high-quality audio using advanced vocoder models. Startups, as well as industry giants like Google and Amazon, are at the forefront of this technology, producing voices that are nearly indistinguishable from human speech.

Academic Advancements in Speech AI

Although speech recognition systems have been around since the 1950s, a significant breakthrough came in 2014 with Baidu's pioneering research. Led by Andrew Ng, the team introduced deep learning methods to ASR, fundamentally reshaping the design and implementation of these systems.

Building on these advancements, companies like OpenAI have pushed the envelope further. OpenAI's Whisper, released in September 2022, stands at the forefront of speech AI models. As an open-source model, Whisper has not only set new standards for accuracy and versatility but has also spurred the growth of speech AI companies that leverage its capabilities to develop human-like conversational systems.

Today's text-to-speech models can closely replicate the intonation, emotion, and cadence of human voices, with companies like Eleven Labs, now valued at over $1 billion, leading the charge. The convergence of these advancements has led to the development of sophisticated speech AI systems like OpenAI's "advanced voice mode." With its recent rollout to paying users, we are beginning to see the real-world applications of this powerful technology.
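The three-step workflow described above can be sketched as a simple composition of functions. The stage bodies below are stand-ins (a real system would call an ASR model, a language model, and a vocoder-backed TTS model); the sketch only shows how representations flow from audio to text and back to audio:

```python
# Schematic of the three-stage speech-to-speech pipeline. Each stage is a
# placeholder that tags its input, so the chain of representations is visible.

def speech_to_text(audio):
    # Stand-in for STT: real systems turn audio into mel-spectrograms
    # and run an ASR model (e.g. Whisper) over them.
    return f"transcript({audio})"

def text_to_text(text, target_lang="en"):
    # Stand-in for TTT: real systems use an LLM for understanding,
    # translation, and response generation.
    return f"response({text}, lang={target_lang})"

def text_to_speech(text):
    # Stand-in for TTS: real systems generate mel-spectrograms and
    # render audio with a vocoder model.
    return f"audio({text})"

def speech_to_speech(audio, target_lang="en"):
    # The full pipeline is just the composition of the three stages.
    return text_to_speech(text_to_text(speech_to_text(audio), target_lang))
```

Calling `speech_to_speech("hello.wav")` traces the full path, producing `audio(response(transcript(hello.wav), lang=en))`, which mirrors how an utterance is transcribed, processed, and re-voiced. The unified models discussed below aim to collapse this composition into a single model that never materializes the intermediate text.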
Transformative Use Cases

Speech-to-speech AI holds immense potential across a range of applications, from enhancing accessibility for individuals with vision impairments to bridging language gaps in global business:

Empowering individuals with vision impairments: Historically, individuals with blindness and vision loss, numbering over 1.1 billion globally, have faced barriers in knowledge-based roles due to reliance on visual data and text-heavy interfaces. Speech-to-speech AI, combined with computer vision technology, is changing how these individuals interact with both physical and digital environments. For example, Be My Eyes uses GPT-4o alongside computer vision to provide real-time audio descriptions of visual surroundings, like iconic landmarks, enhancing the user's spatial awareness.

Bridging language gaps in global business: On a global scale, with more than 7,000 languages spoken worldwide, speech-to-speech AI is breaking down language barriers that have traditionally hindered international trade and collaboration. Real-time translation capabilities enable seamless communication across different languages, fostering trust and cooperation among global partners. For instance, a business executive in Tokyo can now engage in smooth, multilingual meetings with colleagues in São Paulo, overcoming linguistic obstacles and enhancing global business operations.

The Future of Speech-to-Speech AI

We are on the cusp of a major shift in speech-to-speech technology. Recent advancements are pushing the boundaries by developing unified models that move beyond the traditional three-layer approach of speech-to-text, text-to-text, and text-to-speech. Researchers are exploring direct speech-to-speech systems that bypass text altogether, aiming to reduce latency and enhance the fluidity of translations. These innovations promise to make interactions with AI more seamless and intuitive.
In the near term, such developments will significantly improve conversational experiences, while future advancements may address challenges like real-time interruptions and dynamic query changes, with startups already exploring ways to pause and redirect AI processing in more natural and responsive ways.

Moving forward, the key will be to ensure that these innovations are accessible to all and that their benefits are equitably distributed. By doing so, we can harness the power of speech-to-speech AI not just to enhance productivity and economic growth, but to build a more inclusive and connected global community.
