NJ Statehouse Catch-Up: Offshore Wind, AI, Neurodiversity

By George Woolston (February 7, 2025, 9:53 PM EST) — The retraction of New Jersey’s fourth offshore wind solicitation came alongside a wave of legislative and regulatory activity that also proposed workplace rules to bolster inclusivity and a new compensation path for assault victims.


U.K.’s International AI Safety Report Highlights Rapid AI Progress

A new report published by the U.K. government says that OpenAI’s o3 model has made a breakthrough on an abstract reasoning test that many experts thought “out of reach.” It is an indicator of how quickly AI research is advancing, and it means policymakers may soon need to decide whether to intervene before there is time to gather a large pool of scientific evidence. Without such evidence, it cannot be known whether a particular AI advancement presents, or will present, a risk.

“This creates a trade-off,” the report’s authors wrote. “Implementing pre-emptive or early mitigation measures might prove unnecessary, but waiting for conclusive evidence could leave society vulnerable to risks that emerge rapidly.”

In a number of tests of programming, abstract reasoning, and scientific reasoning, OpenAI’s o3 model performed better than “any previous model” and “many (but not all) human experts,” but there is currently no indication of its proficiency with real-world tasks.

SEE: OpenAI Shifts Attention to Superintelligence in 2025

AI Safety Report was compiled by 96 global experts

OpenAI’s o3 was assessed as part of the International AI Safety Report, which was put together by 96 global AI experts. The aim was to summarise the existing literature on the risks and capabilities of advanced AI systems and establish a shared understanding that can support government decision making. Attendees of the first AI Safety Summit in 2023 agreed to establish such an understanding by signing the Bletchley Declaration on AI Safety. An interim report was published in May 2024, and this full version is due to be presented at the Paris AI Action Summit later this month.

o3’s outstanding test results also confirm that simply plying models with more computing power will improve their performance and allow them to scale. However, there are limitations, such as the availability of training data, chips, and energy, as well as the cost.
SEE: Power Shortages Stall Data Centre Growth in UK, Europe

The release of DeepSeek-R1 last month raised hopes that the price point can be lowered. An experiment that costs over $370 with OpenAI’s o1 model would cost less than $10 with R1, according to Nature.

“The capabilities of general-purpose AI have increased rapidly in recent years and months. While this holds great potential for society,” Yoshua Bengio, the report’s chair and Turing Award winner, said in a press release, “AI also presents significant risks that must be carefully managed by governments worldwide.”

International AI Safety Report highlights the growing number of nefarious AI use cases

While AI capabilities are advancing rapidly, as with o3, so is the potential for them to be used for malicious purposes, according to the report. Some of these use cases are fully established, such as scams, biases, inaccuracies, and privacy violations, and “so far no combination of techniques can fully resolve them,” according to the expert authors.

Other nefarious use cases are still growing in prevalence, and experts disagree about whether it will be decades or years until they become a significant problem. These include large-scale job losses, AI-enabled cyber attacks, biological attacks, and society losing control over AI systems.

Since the publication of the interim report in May 2024, AI has become more capable in some of these domains, the authors said. For example, researchers have built models that are “able to find and exploit some cybersecurity vulnerabilities on their own and, with human assistance, discover a previously unknown vulnerability in widely used software.”

SEE: OpenAI’s GPT-4 Can Autonomously Exploit 87% of One-Day Vulnerabilities, Study Finds

Advances in the AI models’ reasoning power mean they can “aid research on pathogens” with the aim of creating biological weapons.
They can generate “step-by-step technical instructions” that “surpass plans written by experts with a PhD and surface information that experts struggle to find online.”

As AI advances, so do the risk mitigation measures we need

Unfortunately, the report highlighted a number of reasons why mitigating the aforementioned risks is particularly challenging. First, AI models have “unusually broad” use cases, making it hard to mitigate all possible risks and potentially allowing more scope for workarounds. Second, developers tend not to fully understand how their models operate, making it harder to fully ensure their safety. Third, the growing interest in AI agents — i.e., systems that act autonomously — presents new risks that researchers are unprepared to manage.

SEE: Operator: OpenAI’s Next Step Toward the ‘Agentic’ Future

Such risks stem from the user being unaware of what their AI agents are doing, the agents’ innate ability to operate outside of the user’s control, and potential AI-to-AI interactions. These factors make AI agents less predictable than standard models.

Risk mitigation challenges are not solely technical; they also involve human factors. AI companies often withhold details about how their models work from regulators and third-party researchers, both to maintain a competitive edge and to prevent sensitive information from falling into the hands of hackers. This lack of transparency makes it harder to develop effective safeguards. Additionally, the pressure to innovate and stay ahead of competitors may “incentivise companies to invest less time or other resources into risk management than they otherwise would,” the report states.
In May 2024, OpenAI’s superintelligence safety team was disbanded and several senior personnel left amid concerns that “safety culture and processes have taken a backseat to shiny products.”

However, it’s not all doom and gloom; the report concludes that experiencing the benefits of advanced AI and conquering its risks are not mutually exclusive. “This uncertainty can evoke fatalism and make AI appear as something that happens to us,” the authors wrote. “But it will be the decisions of societies and governments on how to navigate this uncertainty that determine which path we will take.”


Contract intelligence comes to PDF

According to a new Adobe Acrobat survey, 89% of knowledge workers encounter contracts on the job, with more than half (52%) saying they work with contracts at least weekly. The survey also found that 61% of knowledge workers have signed a contract at work without knowing what’s in it, while 63% of technology leaders say difficulty interpreting contracts and confusing terms has caused business delays.

Last year, Adobe introduced Acrobat AI Assistant, a conversational engine integrated deeply into Reader and Acrobat workflows that generates summaries and insights, answers questions, and can even format information for sharing in emails, reports, and presentations. Now Adobe is introducing new contract intelligence in Acrobat AI Assistant to make navigating and understanding the information in contracts and agreements easier and faster.

Accelerating contract tasks with AI

Contract intelligence in Acrobat AI Assistant automatically recognizes when a document is a contract — including scanned documents — and tailors the experience: generating an overview, surfacing key terms in a single click, quickly summarizing information, and recommending questions. Users can quickly see differences between versions, check for consistency, and catch discrepancies across up to 10 contracts, and clickable citations make it fast and easy to navigate to the source and verify responses.

While the new capabilities aren’t a substitute for professional legal advice, business users can leverage them to save time on tasks like identifying key dates in vendor contracts or preparing to review partnership agreements with legal. Finance teams can accelerate reviews of sales contracts, and marketers can pinpoint changes in updated scopes of work and quickly find deliverables in brand and advertising partnerships.
Protecting data and enhancing reliability

Adobe invented PDF, and Acrobat has become a core productivity tool for more than 650 million monthly active users who open more than 400 billion PDFs in the app each month. Adobe Acrobat AI Assistant supplements LLM technologies with the same artificial intelligence and machine learning models behind Liquid Mode, the technology that supports responsive reading experiences for PDFs on mobile. These models provide a highly accurate understanding of PDF structure and content, enhancing the quality and reliability of AI Assistant’s outputs.

Budhaditya Baul, Director of Product Management at Adobe, manages Document Cloud’s generative AI efforts for 0-to-1 products such as Liquid Mode, AI Assistant, and other projects currently in incubation. According to Baul, the team built additional prompt engineering and an intelligent framework on top of Acrobat AI Assistant’s core capabilities to help deliver more accurate and relevant responses specifically for contracts.

“Acrobat customers are already opening billions of contracts in the app every month,” said Baul. “By bringing contract-specific intelligence to Acrobat AI Assistant and also leveraging a custom-built intelligent citation engine to help customers quickly verify responses, we can make AI Assistant even more valuable for enterprises — all while keeping their data safe.”


DOJ Tells DC Circ. Not To Delay Google Search Fix For Apple

By Matthew Perlman (February 7, 2025, 9:41 PM EST) — The U.S. Department of Justice and state enforcers told the D.C. Circuit Friday that the remedies phase of the search monopolization case against Google is too important to wait while Apple appeals a ruling denying its last-minute bid to intervene in the case.


You’re Overlooking The Key To Customer Service Automation: Tacit Knowledge

Customer service leaders have long been promised that AI is the silver bullet for all their (many) challenges: streamlining workflows, enhancing customer satisfaction, and, most importantly, reducing costs via lower call volumes. This enthusiasm has only intensified with the adoption of generative AI (genAI) in the enterprise.

To be clear: GenAI will transform the customer service landscape. Large language models have significantly expanded AI’s potential in customer service, and the future looks bright. But — and it’s a big but — one pretty major oversight is holding back progress: Enterprises are not capturing the tacit knowledge that employees have but AI solutions don’t.

What Does This Gap Mean?

For years, contact center vendors have painted the vision of training AI to be as good as your “best” agents — you just need to train AI with your call transcripts. The problem with that approach is that the transcript doesn’t capture many elements (and often, those elements aren’t captured anywhere else). Tacit knowledge that AI systems don’t capture includes the intuitive decisions that a human employee makes and context-dependent conditional knowledge — for example, undocumented exceptions to company policies. Without this knowledge, enterprises will struggle to automate anything beyond the simplest customer service inquiries.

What Can You Do?

Capturing this tacit knowledge must start with brands evolving to having AI — not humans — lead and own conversations from start to finish. Humans will then transition to supporting AI behind the scenes to problem-solve exceptions within AI-led interactions. We expect a new type of agent workspace to emerge — one that is specifically designed to capture tacit knowledge and begin putting it to work through AI models.

If you think that this represents a big shift in how contact centers operate today, then you’re right; it certainly does.
And that’s why we think the transition to fully AI-led customer service will take years to mature and will evolve across three phases. Learn about these phases — and what enterprises should do right now to prepare for the future of AI-led customer service — in our report, Tacit Knowledge Will Power The AI-Led Contact Center. Forrester clients can also book a guidance session with me to understand what’s ahead and start mapping their next steps toward developing an AI-led contact center.


Newsletter: 10 Feb 2025

NEWS THIS WEEK

Exciting collaboration announcement! We are excited to announce a new partnership between StartHub.Asia and SolveCube! This collaboration marks an important step in our efforts to foster innovation and talent within our community.

SolveCube is founded and led by industry leaders with expertise in HCM, process engineering, building competency centers, organizational transformation, and AI technologies. It provides on-demand workforce solutions that achieve global speed, accuracy, and scale, with the partnership and support of KPMG, Mercer, and IQEQ, and currently draws on a global talent pool of more than 60,000 experts.


Will AI Take Your B2B Marketing Job?

Ask business leaders why they are investing in AI technologies, and the answer is often emphatic and clear: It’s about efficiency. In Forrester’s 2025 B2B Brand And Communications Survey, 86% of marketing leaders said efficiency was the likeliest impact of AI technologies.

Will AI Bring About A New Industrial Revolution?

If we believe some of the most fervent supporters of AI, we are on the precipice of a white-collar industrial revolution. Just as manufacturing automation upended ways of working in the 1800s, these supporters claim, AI will automate office work. In the pursuit of greater efficiency, they believe AI agents will not only take over routine, repetitive tasks and liberate us from drudgery but also seamlessly replace much of marketing, HR, accounting, and numerous other office roles. In this narrative, efficiency is really a euphemism: It stands in for “doing more with less,” and the “less” in this logic is people.

B2B Marketers See AI As Transformative To Roles, Not Displacing Them

Forrester data suggests B2B marketing leaders aren’t so glum or as hyperfocused on using AI to do more with fewer people. While 80% say AI will automate work currently done by people, just 27% think it will make jobs obsolete, and only 8% believe their own jobs are at risk. In this narrative, AI will still be a revolution that presents new opportunities and drives career gains. But it’s less a “rip-and-replace” strategy for workers and more a technology transformation that will proliferate new roles and skills that are more strategic and valuable.

Luddites Will Lose

Wherever you stand in this debate, one thing is clear: AI will change white-collar professions. Some changes will be profound, some subtle, but being a Luddite in this new industrial revolution isn’t a winning strategy. B2B business leaders must build a foundation for successful AI adoption, including assessing existing governance processes, IT alignment, and current skills.
Then, they must prioritize competing AI use cases to focus on the opportunities with the highest chance of success. Our research suggests that B2B organizations with strong CTO-CMO ties, an entrepreneurial culture, and a well-managed data infrastructure lead in AI adoption and reap the rewards faster. However, as these leading adopters proliferate AI across their companies, they also tend to experience an AI talent shortage. The bottom line: The best way to safeguard your professional future is to embrace today’s AI revolution, not sit on the sidelines.


This AI Generator Can Develop Content That's Fact-Checked

TL;DR: Save 58% on Katteb, an AI content generator that can create more than 30 media types, from articles to product reviews.

Have you ever needed AI to create an article or blurb for a newsletter, only to get the result, double-check it, and realize that the facts weren’t accurate? If so, you’re not the only one — it’s tricky to customize your prompt and ensure everything the chatbot creates is fully fact-checked. Instead of dealing with subpar results and doing the additional legwork of fact-checking, let Katteb create the content instead. This AI generator is designed to save you time by creating articles, product reviews, and so much more content that’s actually error- and plagiarism-free. Grab it while lifetime access is only $79.99 (reg. $195), while supplies last.

Your new favorite AI tool

Imagine having an AI tool that does as it’s told — and does it correctly. That’s what having Katteb makes a reality. Instead of double-checking what AI’s written for you, you could reallocate that time to other time-consuming tasks. Check out what Katteb AI can do for you:

- Generate fact-checked and SEO-optimized articles of up to 2,500 words, with in-text citations and relevant images, in just one click.
- Rewrite web pages and offline text in more than 110 languages while preserving the HTML formatting.
- Read product specs and prices on Amazon to develop original Amazon product reviews in a single click.
- Write content based on YouTube videos lasting up to 30 minutes in 110+ languages.
- Export your generated content to WordPress, Blogger, external files, etc.

Plus, Katteb is dedicated to creating content free from errors and plagiarism. The platform has its own innovative proofreading tools, which support over 25 languages, to ensure everything from web page summaries to articles has zero errors. It can even sniff out any plagiarism in your text and rewrite the content if any is uncovered. Grab lifetime access to the Katteb AI content generator while its price drops to just $79.99.
Act now while inventory is still available! Prices and availability are subject to change.

Katteb AI Content Generator: Lifetime Subscription, only $79.99 at TechRepublic.


U.S. Copyright Office says AI generated content can be copyrighted — if a human contributes to or edits it

In an important and helpful update issued today, the U.S. Copyright Office — which administers copyright protections from the government for human-authored works such as films, TV shows, novels, art, music, and even software — clarified that some forms of AI-generated content can, in fact, receive copyright protection, provided that a human substantially contributed to or changed the content in question.

The clarity came in a new document, “Copyright and Artificial Intelligence, Part 2: Copyrightability,” the second portion of a report that was initially released in July 2024. The report confirms that human creativity remains central to copyright law and intellectual property (IP) rights, even as AI tools become more widely used in artistic and commercial creation. But it should also give enterprises, in particular, reassurance that their brands and IP will remain protected even when they integrate distinctive products and brand marks into AI-generated media, such as Coca-Cola’s controversial AI holiday commercial released late last year.

It marks something of an about-face for the Copyright Office, which previously issued, then rescinded, a copyright registration to Kris Kashtanova, an artist and AI evangelist for Adobe, for her graphic novel “Zarya of the Dawn,” whose images were created with the AI image generator Midjourney (which VentureBeat also uses, including for this article header).

Reacting to today’s news, Kashtanova wrote on the social network X: “Two years ago I started advocating for copyright in AI. It was first Zarya of the Dawn and then Rose Enigma I did for this. It’s a small step forward and I am so happy today. AI work can be copyrighted. Your work matters.
AI are tools for creativity (not replacement of it).”

The Copyright Office also said a third section of this same report will be issued in the future to address the legal implications of training AI on copyrighted material, including licensing and liability. That third section should be a big deal for AI image, video and music generating companies, not to mention large language model (LLM) providers such as OpenAI, Anthropic, Google, Meta and numerous others — as they are all said to have trained on vast quantities of copyrighted material without express permission and are currently facing various lawsuits from human creators as a result.

What qualifies for copyright in the AI generated era of content

The report reaffirms the longstanding principle that copyright applies only to human creativity. While AI can serve as a tool in the creative process, its outputs are not copyrightable unless a human author has exercised sufficient creative control. The Copyright Office outlines three key scenarios where AI-generated material can apply for, and receive, an official certificate of copyright from the office:

- When human-authored content is incorporated into the AI output.
- When a human significantly modifies or arranges the AI-generated material.
- When the human contribution is sufficiently expressive and creative.

In addition, the Copyright Office makes clear that using AI in the creative process does not disqualify a work from copyright protection. AI can assist with:

- Editing and refining text, images or music.
- Generating drafts or preliminary ideas for human creators to shape.
- Acting as a creative assistant while the human determines the final expression.

As long as human authorship remains a core part of the final work, copyright protection can still apply. However, merely providing text prompts to an AI system is not enough to establish authorship.
The Copyright Office determined that prompts are generally instructions or ideas rather than expressive contributions, which are required for copyright protection. Thus, an image generated with a text-to-image AI service such as Midjourney or OpenAI’s DALL-E 3 (via ChatGPT) could not, on its own, qualify for copyright protection. However, if the image was used in conjunction with a human-authored or human-edited article (such as this one), then it would seem to qualify. Similarly, for those looking to use AI video generation tools such as Runway, Pika, Luma, Hailuo, Kling, OpenAI Sora, Google Veo 2 or others, simply generating a video clip based on a description would not qualify for copyright. Yet a human editing together multiple AI-generated video clips into a new whole would seem to qualify.

The report also reiterates that using AI in the creative process does not disqualify a work from copyright protection. If an AI tool assists an artist, writer or musician in refining their work, the human-created elements remain eligible for copyright. This aligns with historical precedents, where copyright law has adapted to new technologies such as photography, film and digital media.

No legislative changes recommended

After analyzing public feedback — including more than 10,000 comments from creators, legal experts and technology companies — the Copyright Office found no immediate need for new legislation, stating that the current laws around copyright in the U.S. should stand the test of time. While some had called for additional protections for AI-generated content, the report states that existing copyright law is sufficient to handle these issues. The Office did, however, acknowledge that it will continue monitoring technological developments and legal interpretations to determine whether future changes are warranted.

Shira Perlmutter, register of copyrights and director of the U.S. Copyright Office, emphasized the importance of human creativity in the copyright system: “After considering the extensive public comments and the current state of technological development, our conclusions turn on the centrality of human creativity to copyright. Where that creativity is expressed through the use of AI systems, it continues to enjoy protection. Extending protection to material whose expressive elements are determined by a machine, however, would undermine rather than further the constitutional goals of copyright.”

Additionally, the Copyright Office plans to update its official Compendium of U.S. Copyright Office Practices to provide clearer guidelines for creators using AI tools.

AI creators celebrate the news

As news of the Copyright Office’s new document spread across social media, particularly on X — the unofficial nexus of AI research


Accountant-Owned Law Firms Could Blur Ethical Lines

By Seth Laver | February 7, 2025, 1:57 PM EST

In a novel move, Big Four accounting firm KPMG LLP has taken the first step in seeking to own and operate a law firm in the U.S. Although permitted in other countries, the U.S. generally prohibits nonlawyers from law firm ownership. In 2020, Utah and Arizona became the first states to relax that standard, thereby opening the possibility of what is apparently now on the horizon. Online legal providers were the first to act, and now accounting firms have entered the fold.

There are roughly 1.3 million attorneys in the U.S. Competition is considerable, and attorneys strive to develop a brand, a client base and some way to set themselves apart from the rest of the pack. That pack has exclusively consisted of other members of the bar who have met the specific requirements necessary to practice law under the American Bar Association’s Model Rules of Professional Conduct, which govern attorneys. Those rules prohibit nonlawyers from owning law firms in the U.S. due to ethical concerns regarding conflict-of-interest principles.[1]

A far cry from regulating attorneys, enforcing and supervising the practice of law by nonattorneys could prove challenging. Attorney oversight, or more to the point the supervision of the practice of law, is of paramount concern: the governing rules not only control and guide the practice of law but also help foster a sense of trust and transparency between practitioners and the public. Whether it be an accountant, an attorney, or anyone — or anything — in between, we must collectively ensure that the ethical and procedural rules governing the practice of law apply uniformly and fairly, and with an eye toward the client’s best interests.
Rule 5.4, titled “Professional Independence of a Lawyer,” generally prohibits an attorney or firm from sharing fees with a nonlawyer, and it prohibits an attorney from forming “a partnership with a nonlawyer if any of the activities of the partnership consist of the practice of law.”[2] The stated goal of the rule is to “protect the lawyer’s professional independence of judgment.”[3]

As Stephen P. Younger, past president of the New York State Bar Association, wrote in a 2022 Yale Law Journal article:

The restrictions imposed by the Rule aim to address the concern that if non-lawyers, who are not bound by the Rules of Professional Conduct, have a financial interest in a lawyer’s profits, they might prioritize profit over the duties the lawyer owes to clients and adversely influence a lawyer’s conduct.[4]

In most jurisdictions, law school graduates are ineligible for admission to the bar without passing a bar exam. Attorneys must take continuing education courses, some geared specifically to conflicts of interest and the rules governing the duties owed to clients. The cost of violating those rules can be severe, up to and including disbarment. According to Younger, the applicable governing bodies may face difficulties in ensuring that “nonlawyers would uphold the same ethical duties if they were allowed to be involved in providing legal services.”[5] The regulation of educated and trained attorneys is no easy task, but it may ultimately prove less demanding than enforcing those rules against those without that level of legal training and experience.

On the other hand, proponents of the recent reforms disagree with the traditional restrictions of Rule 5.4.
In 2020, amid its efforts to abandon Rule 5.4, the Arizona task force responsible for the amendments said in a statement that it was driven by “an ethical obligation to assure that legal services are available to the public and that if the rules stand in the way of making those services available, the rules should change.”[6]

According to recent studies, a disproportionate number of Americans cannot effectively engage the legal profession. Reportedly, nearly 80% of the 20 million civil cases filed in state courts each year involve at least one unrepresented party,[7] and more than half of small businesses facing a legal issue cannot engage counsel.[8] While there are many causes contributing to this wide justice gap, there is a growing academic consensus that the strict regulations limiting who may provide legal services, and how those services may be funded, are a major driver.[9]

Enter Utah and Arizona, which have been leaders in changing law firm ownership structures and the delivery of legal services. For its part, in August 2020, the Arizona Supreme Court voted on “far-reaching changes that could transform the public’s access to legal services.”[10] According to Arizona Supreme Court Chief Justice Robert Brutinel, the goal of the amendments was to “improve access to justice and to encourage innovation in the delivery of legal services.” By way of its reforms, Arizona now permits nonlawyer “legal paraprofessionals” to provide limited legal services to clients, including court appearances. Moreover, Arizona rescinded the traditional Rule 5.4 ban on nonlawyer ownership of a law firm and on fee-sharing. In its place, Arizona enacted a regulatory framework to license alternative business structures. Arizona has reportedly approved more than 100 alternative business structures, including legal services and staffing companies such as LegalZoom, as well as large personal injury firms partially owned by nonlawyers.
Similarly, Utah initiated a program in 2020 to allow nontraditional legal businesses to operate under eased rules with oversight. Reportedly, KPMG Law US’ recent application to the Arizona Supreme Court was the first overture by an accounting firm seeking to take advantage of these laws. A court committee is considering the application and whether to grant the required licensure. According to a Reuters article, KPMG said it would lean “on [its] network and technology to provide compliance and contract-related services and other outsourced legal work in the United States.”

We can expect considerable debate and potential pushback. Notably, the ABA House of Delegates reaffirmed its commitment to Rule 5.4 and overwhelmingly passed a resolution stating that any modification to the rule, as drafted, is “inconsistent with the core values of the legal profession.” The ABA encouraged states to “innovate and to experiment” to improve
