4-Step guide to avoiding e-invoicing penalties around the globe

Governments around the world are pushing hard for unified tax collection and paperless data exchange. One of the biggest shifts involves e-invoicing regulations, which are becoming obligatory in a growing number of countries.

Why are so many countries going paperless? The main benefits of embracing e-invoicing include:

- Reduced tax fraud and increased revenue for governments
- Cost optimization, since e-invoicing can cut costs by as much as 60% to 80% compared to paper invoices
- Easier automation
- A lower risk of human error, by eliminating manual tax reporting
- Environmental benefits

However, the process of moving to electronic invoices is complicated and lengthy, with different countries adopting different formats, mandates, integrations with governmental platforms, and, most of all, dates on which the new approach to data exchange becomes mandatory.

Staying on top of these changes is crucial: not abiding by the new rules can pose a serious problem, not only because of disrupted communication with your clients and partners but also because of the penalties that certain countries plan to impose on taxpayers who issue incorrect invoices.

How to avoid e-invoicing penalties

Given the number of different e-invoicing mandates, governmental guidelines, and potential fines, there is unfortunately no “one size fits all” approach to avoiding e-invoicing penalties. But these tips can help your organization reduce the likelihood of non-compliance and adopt better solutions going forward.

1. Generate your e-invoices on time. One of the most universal penalty triggers is late e-invoice generation. Understand the country or countries in which your invoicing mandates apply, and generate your e-invoices as soon as feasible after producing a taxable supply.

2. Ensure e-invoicing accuracy. Whether an inaccuracy stems from negligence or intent, governments will treat it the same. While many countries may have a “slap on the wrist” policy for first-time offenses, these can still be costly errors. Thoroughly review e-invoices for accuracy, and make sure the format and information comply with the requirements of the governing body (a minimal validation sketch appears at the end of this guide).

3. Stay ahead of potential changes. E-invoicing mandates are on the rise, and governments around the globe are rapidly digitalizing and modernizing to reduce their indirect tax (e.g. VAT) gaps. Be aware of upcoming changes and give your organization time to adapt and adjust its systems to remain compliant.

4. Utilize dependable e-invoicing partners. As mandate adoption continues to spread, organizations (especially those with a global footprint) will find e-invoicing a growing headache, and the growing complexity will only lead to more penalties. Take a proactive approach to the global shift in e-invoicing requirements by choosing the right platform and getting ahead of upcoming regulations.

Interested in learning more about the global landscape of e-invoicing mandates? Download the new white paper, “Mandatory e-Invoicing Penalties Around the World (And How to Avoid Them),” available for free.
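Returning to step 2: the accuracy check lends itself to automation. Below is a minimal Python sketch of a pre-submission validator that checks an invoice’s fields against a per-country rule set before it is sent to a governmental platform. The field names and the rules for “IT” and “PL” are illustrative assumptions, not the requirements of any real mandate; a production system would encode the actual rules published by each tax authority.

```python
# Minimal sketch: pre-submission e-invoice field validation.
# The rule sets below are illustrative assumptions, not real mandates;
# always check the requirements published by the relevant tax authority.

REQUIRED_FIELDS = {
    "IT": {"seller_vat", "buyer_vat", "invoice_number", "issue_date", "total", "sdi_code"},
    "PL": {"seller_vat", "buyer_vat", "invoice_number", "issue_date", "total"},
}

def validate_invoice(invoice: dict, country: str) -> list[str]:
    """Return a list of problems; an empty list means the invoice passes."""
    problems = []
    required = REQUIRED_FIELDS.get(country)
    if required is None:
        return [f"no rule set configured for country {country!r}"]
    for field in sorted(required - invoice.keys()):
        problems.append(f"missing required field: {field}")
    if "total" in invoice and invoice["total"] <= 0:
        problems.append("total must be positive")
    return problems

invoice = {"seller_vat": "IT01234567890", "invoice_number": "2024/001",
           "issue_date": "2024-10-25", "total": 100.0}
for problem in validate_invoice(invoice, "IT"):
    print(problem)  # e.g. "missing required field: buyer_vat"
```

The point of the sketch is the shape of the solution: keep the country rules in data rather than code, so they can be updated as mandates change (step 3).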


Self-Driving Car Co. Waymo Snags $5.6B In Series C Funding

By Jade Martinez-Pogue (October 25, 2024, 4:47 PM EDT) — Self-driving car company Waymo on Friday announced that it closed a behemoth funding round, raising $5.6 billion from private equity and venture capital investors….


Predictions 2025: AI’s Mishaps And Patchy Rules Lead To Uneven Pockets Of Trust

2024 saw AI mistakes dominate the headlines, from anticompetitive cases reining in big tech to electoral manipulation and disinformation spread by bad actors on social media. As AI and emerging-technology mistakes loom over headlines and companies’ shoulders, organizations are building the plane as they fly it. Executives are learning new lessons about AI and tech responsibility daily, all in the name of earning consumer and customer trust.

But not all countries have the same (or any) rules for AI, which means that not all companies are playing the same game. Patchy AI standards and regulations across the globe will result in some organizations faring better than others at building and maintaining trust. In 2025, as leaders attempt to navigate these uneven pockets of trust, Forrester predicts that:

Trust in government will rise 10% post-2024 elections — briefly. With several major political elections in 2024, almost half of the world’s population will have voted in a national election. Disinformation and social manipulation by foreign state-based actors have fueled distrust in democratic institutions: in nine of the 11 G20 countries we surveyed, more online adults reported that they do not trust the government to follow through on its promises than reported that they do. After a politically uncertain, sometimes contentious election year, 2025 will see transitions of power begin to restore trust in government globally. This bump in trust will be short-lived, however; the honeymoon halo will wear off just as newly elected governments begin to grapple with more serious challenges that can’t be solved quickly.

Regulated and unregulated industries will diverge on trust. Two types of trust will emerge: enforced trust and performative trust. Regulated industries such as financial services and healthcare will live within enforced trust, where government regulations and oversight set the standards for transparency and accountability. Unregulated industries, on the other hand, will operate in an environment of performative trust: not required by law to act in trust-building ways, they are instead motivated by fear of brand damage, public condemnation, and other potential financial loss. In 2025, Forrester predicts a divergence in which regulated industries maintain current levels of trust while unregulated industries see trust erode.

AI-powered skill intelligence and career tools will increase employees’ trust in AI. One realization companies had in the AI brouhaha was that a big portion of their employee base required reskilling (learning completely new skills for emerging jobs) or upskilling (learning additional skills for their existing jobs). Forrester’s 2024 data shows that data and analytics skills are among those global business and tech professionals say they need most to support their organization’s modernization in the next 12 months. Familiarity breeds trust: progressive organizations will leverage AI to analyze their workforce’s skill data to solve complex challenges such as talent redeployment, upskilling, and reskilling, and to help scale employee experience initiatives that currently require too much manual effort.

If you’re a Forrester client, check out the Predictions 2025: Trust report. There, you can read all five of our 2025 trust predictions in full and get more details on what they mean for you and your organization in the upcoming year. Set up a Forrester guidance session with us to discuss how to apply these predictions and best practices to stay ahead on trust in 2025. If you aren’t yet a Forrester client, you can visit our Predictions 2025 hub to register for webinars and download one of our complimentary Predictions guides, which provide more insight into our predictions for 2025.


Decline of X is an opportunity to do social media differently – but combining ‘safe’ and ‘profitable’ will still be a challenge

It’s now almost two years since Elon Musk concluded his takeover of Twitter (now called X) on 27 October 2022. Since then, the platform has become an increasingly polarised and divisive space.

Musk promised to deal with some of the issues that had already frustrated users, particularly bots, abuse and misinformation. In 2023, he said there was less misinformation on the platform because of his efforts to tackle the bots. But others disagree, claiming that misinformation is still rife there.

A potential reaction to this may be apparent in recent data highlighted by the Financial Times, which showed the number of UK users of the platform had fallen by one-third, while US users had dropped by one-fifth. The data used to reach these conclusions may be open to question, as it is hard to find out user numbers directly from X. The figures also come out against the background of a disagreement over whether X’s traffic is waning or not. But there has been a notable trend in academia for individuals and some organisations to leave for alternative platforms such as Bluesky and Threads, or to quit social media altogether.

Elon Musk has claimed that X is hitting record highs in user-seconds, a measure of how long users are spending on the site. But advertising revenue is reported to have dropped sharply amid Musk’s controversial changes, such as his “free speech” approach on the platform. If so, it will be reflected in the platform’s financial performance, which has been dire. The platform currently has no clear pathway to profitability.

X’s loss has naturally been a gain for its competitors. Despite a rather slow start due to its “invite only” model, Bluesky recently announced that it had topped 10 million users. This is still quite small compared with X’s 550 million users and Threads’ 200 million. But there are questions for all platforms over how active users are and the proportion of bots versus human users. Threads also benefits from being connected to Instagram.

The world’s richest man can afford to let X fall in value from his purchase price of US$44 billion (£33.7 billion). Likewise, Meta can probably afford to prop up Threads. But Bluesky will have to find inventive ways to remain viable as a platform.

So is it the right time for users to try something completely different on social media? Alternatives to X have to be mindful of striking the right balance between being a viable social media platform and not developing the same issues that have turned X toxic for many users.

The approach taken by Bluesky and Mastodon is to engage more with their communities to deal with issues such as abuse and fake information. Moderating content is tricky, as it requires a lot of resources and support for those using the platform. But the contrast with Elon Musk’s approach to ownership is stark.

The problem for Bluesky, and to a lesser extent Mastodon, is that once a platform gains traction it also attracts those with bad intent. Think of it as the one nice, cool bar in town that suddenly becomes popular. Once everyone hears about the bar, the troublemakers start to arrive. When that happens, the good people have to find a bar elsewhere. Once an alternative platform becomes a means to reach many millions, the people who drove users away from X may head there like moths to a light.

Alternative approaches

One possible solution is a subscription model for social media alongside paid advertisements. For growing platforms such as Bluesky, sponsored posts and adverts will come as the user base grows. But as was evident with X, that is unlikely to be enough. X’s annual revenue peaked at US$5 billion (£3.8 billion) in 2021 and has been in decline ever since, even as the platform has culled thousands of jobs in the past two years.

The subscription model is not new to social media: X has its own paid-for blue checkmark and LinkedIn has a premium subscription. But a subscription alone does not guarantee a profitable or functioning social media platform. A subscription-based platform is not exactly equitable either, as not everyone can afford to pay.

The question is how much people would be willing to pay for a social media subscription that guarantees no adverts and no bots, as well as proper moderation to remove abusive accounts and fake information. The trade-off is that free users would have to put up with adverts on their timelines. Other models could be floated in which non-profit and student accounts are cheaper, but this again excludes other users, and it may not sit well with shareholders focused on profitability.

As it stands, even if one in ten of Bluesky’s 10 million users paid £5 a month, the platform would generate £60 million a year (a back-of-the-envelope calculation is sketched at the end of this article). That is not even close to X’s revenue of US$300 million (£230 million) back in 2012.

Real change

People moving to a new social media platform will want assurances that it won’t turn into another X. Organisations and individuals with large followings may also be reluctant to invest time in new platforms while they still get something out of the old one.

There are big, mainstream alternatives, of course: Instagram, Facebook and TikTok. But Twitter offered something different. Real change could happen when the number of organisations leaving X over how it has been run reaches a critical mass, though what that threshold is remains open to question. Those in the world of academia are cautious and at best hedging their bets, as I have found in my own search.

As X increasingly fails to deal with misinformation, it is leaning further into the territory of right-wing platforms such as Truth Social. The newer platforms might be a safer haven for now, but that is likely to change if lessons around ownership, funding and moderation are not learned.
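As a footnote to the subscription arithmetic above, here is the back-of-the-envelope calculation as a minimal Python sketch. The 10% paid-conversion rate is an assumption made for illustration; the user count and the £5 monthly price are the figures discussed in the article.

```python
# Back-of-the-envelope subscription revenue using the article's figures.
# The 10% paid-conversion rate is an illustrative assumption; the user
# count and monthly price come from the article.

users = 10_000_000        # Bluesky's reported user base
conversion_rate = 0.10    # assumed share of users willing to pay
monthly_price_gbp = 5     # proposed subscription price (pounds per month)

paying_users = int(users * conversion_rate)
annual_revenue_gbp = paying_users * monthly_price_gbp * 12

print(f"Paying users:   {paying_users:,}")          # 1,000,000
print(f"Annual revenue: £{annual_revenue_gbp:,}")   # £60,000,000
```

Even under that fairly generous conversion assumption, the result falls well short of what Twitter was earning over a decade ago.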


Anthropic updates its risk governance framework

The launch this week by Anthropic of an update to its Responsible Scaling Policy (RSP), the risk governance framework it says it uses to “mitigate potential catastrophic risks from frontier AI systems,” is part of the company’s push to be perceived as an AI-safety-first provider compared with competitors such as OpenAI, an industry analyst said Wednesday. Thomas Randall, director of AI market research at Info-Tech Research Group, said that while the changes will not bring immediate business benefits, the firm’s founding was “grounded in two OpenAI executives leaving that company due to concerns about OpenAI’s safety commitment.” In the executive summary of the updated RSP, Anthropic stated, “in September 2023, we released our Responsible Scaling Policy (RSP), a public commitment not to train or deploy models capable of causing catastrophic harm unless we have implemented safety and security measures that will keep risks below acceptable levels. We are now updating our RSP to account for the lessons we have learned over the last year. This updated policy reflects our view that risk governance in this rapidly evolving domain should be proportional, iterative, and exportable.”


Election Records Law Needs Update, Mich. Justice Says

By Carolyn Muyskens (October 25, 2024, 5:57 PM EDT) — The Michigan Supreme Court declined on Friday to revive criminal charges against an election worker who downloaded a copy of a voter list onto a personal thumb drive, prompting one justice to argue the law he was cleared of violating is out of touch in the digital age. …


The Next Generation Will Be the Driving Force Behind AI Regulation

The wide-scale introduction of artificial intelligence sent shockwaves through every industry, disrupting the way we live, work, and even learn. In education specifically, it has caused traditional educators to experience a “Gutenberg printing press shock,” as many of their skills have essentially become obsolete overnight. AI’s quick rise has raised fears of risks such as plagiarism and lessened student engagement, leading many learning institutions to restrict, or in some cases even ban, the technology from classrooms.

While I acknowledge and understand the potential risks associated with AI, I believe there is far more opportunity for the good of humanity than harm. If harnessed properly and responsibly, this groundbreaking technology has the potential to support and augment students’ learning exponentially, much as the printed book, the calculator, and the computer have done for previous generations. So the question is not if we should harness AI, but how.

It’s clear the technology needs guardrails. In fact, many groups, from government officials and business leaders to celebrities like Tom Hanks, have joined the debate on AI regulation. Yet world leaders have been slow to act, and efforts have been restricted to national and regional spheres.

Why the reluctance, and the emphasis on local perspectives? Even during the peak of the Cold War, opposing factions aimed for international consensus, especially on ethical norms or “red lines” related to nuclear weapons. Some theorize that this hesitancy toward AI regulation stems largely from leaders’ insufficient grasp of the technology and its ramifications.

Why wouldn’t we engage the generation that seamlessly integrates AI into its daily routines? Undoubtedly, they not only have viewpoints on the matter but can also provide a more expansive and insightful perspective on the ethics of the technology. A proactive group of international students aged 13 to 18 from Institut auf dem Rosenberg decided to take the initiative and developed a 13-point charter to govern AI, calling for world leaders to promptly regulate AI development and usage through an international treaty and regulatory agency. A selection of the students’ proposed guardrails, offered as a seed for global accord:

Control input and output. All organizations, whether private or state, engaged in designing, engineering, and/or distributing AI products shall be held unequivocally accountable for the information generated by AI systems. These organizations must establish specialized departments combining human oversight and automated, machine-learning-based technologies to guarantee the responsible utilization of AI. An external, impartial global agency shall meticulously oversee and ensure strict adherence to proper AI usage, conferring AI-Safe-Use approval badges exclusively upon organizations that diligently comply with AI standards.

Transparent tracing of sources. Complete transparency in acknowledging the entities responsible for AI processes is imperative. All AI-processed information must therefore be transparently traceable to its origins, specifically attributing it to the entities conducting the information processing. Users shall enjoy unrestricted access to all original input data employed by AI systems. Violations of source-tracing obligations will be met with resolute legal enforcement. (A minimal sketch of what such tracing could look like in practice appears at the end of this piece.)

Regulation of deepfakes. Mandatory watermarks or detectable patterns are recommended for all deepfake or artificially created content, along with increased investment in deepfake detection technologies. Unethical deepfake actions, including defamation and identity theft, must be unequivocally prosecutable offenses. AI systems shall rigorously maintain accessible interaction histories, with AI software manufacturers legally accountable for verifying the origin of disseminated information.

Prevention of monopolies and duopolies. In the pursuit of equitable AI development and access, signatory parties solemnly pledge to actively champion diversity and counteract monopolies, duopolies, and oligopolies within the AI creative sphere. This commitment aims to foster innovation, fairness, and global collaboration.

Support for cultural and academic endeavors. AI programs must be designed exclusively to support cultural and academic creators, refraining from autonomous generation of cultural and academic content.

These excerpts are just a glimpse into the thorough work of our students. The question of ethics in AI has the potential to bring together a polarized world for the greater good of all mankind; it is an opportunity we should give the next generation. For a detailed look at the Rosenberg AI Charter and this significant project, please visit here.
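Returning to the charter’s source-tracing point: as a thought experiment, here is a minimal Python sketch of what a machine-readable attribution record might look like, binding a piece of AI-generated content to the entity that processed it and to its input sources. The record fields are illustrative assumptions, not part of the students’ charter or any real standard.

```python
# Minimal sketch of content provenance: attach a tamper-evident
# attribution record to a piece of AI-generated text. The record
# fields are illustrative assumptions, not a real standard.
import hashlib
import json
from datetime import datetime, timezone

def make_provenance_record(content: str, processor: str, sources: list[str]) -> dict:
    """Bind content to the entity that processed it and its input sources."""
    return {
        "content_sha256": hashlib.sha256(content.encode("utf-8")).hexdigest(),
        "processed_by": processor,   # entity accountable for the output
        "input_sources": sources,    # original inputs users can inspect
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }

def verify(content: str, record: dict) -> bool:
    """Check that the record actually refers to this exact content."""
    digest = hashlib.sha256(content.encode("utf-8")).hexdigest()
    return digest == record["content_sha256"]

text = "Example AI-generated summary."
record = make_provenance_record(text, "ExampleAI Inc.", ["https://example.org/dataset"])
print(json.dumps(record, indent=2))
print("verified:", verify(text, record))  # True; any edit to the text breaks it
```

A real system would also digitally sign the record so the attribution itself could not be forged; the content hash here only makes tampering with the text detectable.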


Can Security Experts Leverage Generative AI Without Prompt Engineering Skills?

Professionals across industries are exploring generative AI for various tasks, including creating information security training materials. But will it truly be effective? Brian Callahan, senior lecturer and graduate program director in information technology and web sciences at Rensselaer Polytechnic Institute, and Shoshana Sugerman, an undergraduate student in the same program, presented the results of their experiment on this topic at ISC2 Security Congress in Las Vegas in October.

Experiment involved creating cyber training using ChatGPT

The main question of the experiment was: how can we train security professionals to administer better prompts for an AI to create realistic security training? Relatedly, must security professionals also be prompt engineers to design effective training with generative AI?

To address these questions, the researchers gave the same assignment to three groups: security experts with ISC2 certifications, self-identified prompt engineering experts, and individuals with both qualifications. Their task was to create cybersecurity awareness training using ChatGPT. Afterward, the training was distributed to the campus community, where users provided feedback on the material’s effectiveness.

The researchers hypothesized that there would be no significant difference in the quality of training. But if a difference emerged, it would reveal which skills were most important. Would prompts created by security experts or by prompt engineering professionals prove more effective?

Training takers rated the material highly, but ChatGPT made mistakes

The researchers distributed the resulting training materials, which had been edited slightly but consisted mostly of AI-generated content, to Rensselaer students, faculty, and staff. The results indicated that:

- Individuals who took the training designed by prompt engineers rated themselves as more adept at avoiding social engineering attacks and at password security.
- Those who took the training designed by security experts rated themselves as more adept at recognizing and avoiding social engineering attacks, detecting phishing, and prompt engineering.
- People who took the training designed by dual experts rated themselves as more adept at understanding cyberthreats and detecting phishing.

Callahan noted that it seemed odd for people trained by security experts to feel they were better at prompt engineering. However, those who created the training didn’t generally rate the AI-written content very highly. “No one felt like their first pass was good enough to give to people,” Callahan said. “It required further and further revision.”

In one case, ChatGPT produced what looked like a coherent and thorough guide to reporting phishing emails. However, nothing written on the slide was accurate: the AI had invented processes and an IT support email address. Asking ChatGPT to link to RPI’s security portal radically changed the content and generated accurate instructions. In this case, the researchers issued a correction to learners who had received the inaccurate information in their training materials. None of the training takers identified that the training information was incorrect, Sugerman noted.

Disclosing whether trainings are AI-written is key

“ChatGPT may very well know your policies if you know how to prompt it correctly,” Callahan said. In particular, he noted, all of RPI’s policies are publicly available online.
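As an illustration of that grounding point, here is a minimal sketch using the OpenAI Python client (v1 SDK). The model name, the portal URL, and the prompt wording are assumptions for illustration; the article does not say exactly how the researchers phrased their prompts.

```python
# Minimal sketch: steer the model toward a named, publicly available
# policy page instead of letting it invent procedures. Model name,
# URL, and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ungrounded = "Write a slide on how to report phishing emails."
grounded = (
    "Write a slide on how to report phishing emails at RPI. "
    "Base every step on the official security portal at "
    "https://security.rpi.edu (hypothetical URL for this sketch). "
    "If a procedure or contact address is not on that page, say "
    "'verify with IT' instead of guessing."
)

for prompt in (ungrounded, grounded):
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content[:300], "\n---")
```

Note that a bare API call cannot fetch the URL; the prompt can only steer the model toward policy text it already knows and instruct it to abstain rather than invent details. ChatGPT’s browsing-enabled interface, by contrast, can retrieve the page directly, which is presumably why linking to the portal changed the output so much.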
The researchers revealed that the content was AI-generated only after the training had been conducted. Reactions were mixed, Callahan and Sugerman said: many students were “indifferent,” expecting that some written materials in their future would be made by AI, while others were “suspicious” or “scared.” Some found it “ironic” that training focused on information security had been created by AI.

Callahan said any IT team using AI to create real training materials, as opposed to running an experiment, should disclose the use of AI in the creation of any content shared with other people. “I think we have tentative evidence that generative AI can be a worthwhile tool,” Callahan said. “But, like any tool, it does come with risks. Certain parts of our training were just wrong, broad, or generic.”

A few limitations of the experiment

Callahan pointed out a few limitations of the experiment. “There is literature out there that ChatGPT and other generative AIs make people feel like they have learned things even though they may not have learned those things,” he explained. Testing people on actual skills, instead of asking them to report whether they felt they had learned, would have taken more time than was allotted for the study, Callahan noted.

After the presentation, I asked whether Callahan and Sugerman had considered using a control group given training written entirely by humans. They had, Callahan said. However, dividing training makers into cybersecurity experts and prompt engineers was a key part of the study, and there weren’t enough people in the university community who self-identified as prompt engineering experts to populate a control category by further splitting the groups.

The panel presentation included data from a small initial group of participants: 51 test takers and three test makers. In a follow-up email, Callahan told TechRepublic that the final version for publication will include additional participants, as the initial experiment was in-progress pilot research.

Disclaimer: ISC2 paid for my airfare, accommodations, and some meals for the ISC2 Security Congress event held Oct. 13–16 in Las Vegas.


IT pros: One-third of AI projects just for show

“This can lead to a dangerous cycle where decision-makers become skeptical of AI’s potential, reducing future investment,” Nagaswamy says. “The long-term impact is even more worrying — companies risk falling behind competitors who are implementing AI strategically. Their teams miss out on crucial learning experiences, leaving them ill-equipped to handle genuine AI deployments down the road.”

Misunderstanding AI

A huge part of the problem is a lack of understanding of AI’s capabilities, adds Matt Rosen, CEO of digital consulting firm Allata. In many cases, board members, investors, or executives push for projects that AI is ill-suited to address.

“You don’t have business leaders, or even IT leaders, taking some basic AI literacy classes,” he says. “There’s some fundamental misunderstanding about what problems AI solves, and there needs to be a continuous curiosity and learning, not only from the IT professionals, but from the IT leadership and then the business executives that are expecting technology solutions to be delivered.”


Calif. Judge Urged To Uphold $262M Hard Drive IP Verdict

By Hannah Albarazi (October 24, 2024, 9:44 PM EDT) — MR Technologies has asked a California federal judge to deny Western Digital’s bid to toss a $262 million patent infringement verdict in a dispute over disk drive storage technology, saying the hard drive behemoth’s desire for a redo is outweighed by its failure to present any legal errors or abuse of discretion by the court….
