Musi Loses Early Bid To Get Back On App Store

By Theresa Schliep (January 31, 2025, 9:00 PM EST) — A California federal judge has rejected a music streaming service’s initial bid to be restored to Apple’s App Store after it had been removed for alleged intellectual property infringement, saying that the tech giant has “broad discretion” to delete apps from its marketplace…


Sam Altman admits OpenAI was ‘on the wrong side of history’ in open source debate

Sam Altman, CEO of OpenAI, made a striking admission on Friday that his company has been “on the wrong side of history” regarding open source AI, signaling a potential seismic shift in strategy as competition from China intensifies and efficient open models gain traction. The candid acknowledgment came during a Reddit “Ask Me Anything” session, just days after Chinese AI firm DeepSeek rattled global markets with its open source R1 model that claims comparable performance to OpenAI’s systems at a fraction of the cost.

“Yes, we are discussing [releasing model weights],” Altman wrote. “I personally think we have been on the wrong side of history here and need to figure out a different open source strategy.” He noted that not everyone at OpenAI shares his view and it isn’t the company’s current highest priority.

The statement represents a remarkable departure from OpenAI’s increasingly proprietary approach in recent years, which has drawn criticism from some AI researchers and former allies, most notably Elon Musk, who is suing the company for allegedly betraying its original open source mission.

Sam Altman on DeepSeek: ‘We will maintain less of a lead’

Altman’s comments come amid market turmoil triggered by DeepSeek’s emergence. The Chinese company’s claims of building advanced AI models for just $5.6 million in training costs (though total development costs are likely much higher) sent Nvidia’s stock plummeting, wiping out nearly $600 billion in market value—the largest single-day drop for any U.S. company in history.
“We will produce better models, but we will maintain less of a lead than we did in previous years,” Altman acknowledged in the same AMA, addressing DeepSeek’s impact directly. In the same forum, he called DeepSeek’s model “very good.”

Sam Altman admits OpenAI’s closed strategy may be flawed

DeepSeek’s breakthrough, whether or not its specific claims prove accurate, has highlighted shifting dynamics in AI development. The company says it achieved its results using only 2,000 Nvidia H800 GPUs—far fewer than the estimated 10,000+ chips typically deployed by major AI labs. This approach suggests that algorithmic innovation and architectural optimization might matter more than raw computing power. The revelation threatens not just OpenAI’s technical strategy, but its entire business model built on exclusive access to massive computational resources.

The open source debate: innovation vs. security

However, DeepSeek’s rise has also intensified national security concerns. The company stores user data on servers in mainland China, where it could be subject to government access. Several U.S. agencies have already moved to restrict its use, with NASA becoming the latest to block the application citing “security and privacy concerns.”

OpenAI’s potential pivot to open source would mark a return to its roots. The company was founded as a non-profit in 2015 with the mission of ensuring artificial general intelligence benefits humanity. However, its transition to a “capped-profit” model and increasingly closed approach has drawn criticism from open source advocates. “The correct reading is: ‘Open source models are surpassing proprietary ones,’” wrote Meta’s chief AI scientist Yann LeCun on LinkedIn, responding to DeepSeek’s emergence.
“They came up with new ideas and built them on top of other people’s work. Because their work is published and open source, everyone can profit from it. That is the power of open research and open source.”

A new chapter in AI development

While Altman’s comments suggest a strategic shift may be coming, he emphasized that open source isn’t currently OpenAI’s top priority. This hesitation reflects the complex reality facing AI leaders: balancing innovation, security, and commercialization in an increasingly multipolar AI world.

The stakes extend far beyond OpenAI’s bottom line. The company’s decision could reshape the entire AI ecosystem. Open-sourcing key models could accelerate innovation and democratize access, but it might also complicate efforts to ensure AI safety and security—core tenets of OpenAI’s mission.

The timing of Altman’s admission, coming after DeepSeek’s market shock rather than before it, suggests that OpenAI may be reacting to market forces rather than leading them. This reactive stance marks a striking role reversal for a company that has long positioned itself as AI’s north star.

As the dust settles from DeepSeek’s debut, one thing becomes clear: the real disruption isn’t just about technology or market value—it’s about challenging the assumption that closely guarded AI models are the surest path to artificial general intelligence. In that light, Altman’s admission might be less about being on the wrong side of history and more about recognizing that history itself has changed course.


Bezos Satellite Co. Gets Reprieve In Docs Fight With His Paper

By Greg Lamm (January 31, 2025, 7:28 PM EST) — A Washington state court official has temporarily blocked the state labor department from releasing records linked to investigations at an internet satellite facility launched by Jeff Bezos’ Amazon, in a public records battle with The Washington Post, a newspaper also owned by the billionaire…


Essential principles to produce and consume data for AI acceleration

This is a VB Lab Insights article presented by Capital One.

AI offers transformative potential, but unlocking its value requires strong data management. AI builds on a solid data foundation that can iteratively improve, creating a flywheel effect between data and AI. This flywheel enables companies to build more customized, real-time solutions that unlock impact for their customers and the business.

Managing data in today’s world is not without complexity. Data volume is skyrocketing, with research showing it’s doubled in the last five years alone. As a result, 68% of data available to enterprises is left untapped. Within that data, there’s a huge variety of structures and formats, with MIT noting that around 80-90% of data is unstructured — fueling complexity in putting it to use. And finally, the velocity at which data needs to be deployed to users is accelerating. Some use cases call for sub-10 millisecond data availability, or in other words, ten times faster than the blink of an eye. The data ecosystems of today are big, diverse and fast — and the AI revolution is further raising the stakes on how companies manage and use data.

Fundamentals for great data

The data lifecycle is complicated and unforgiving, often involving many steps, many hops and many tools. This can lead to disparate ways of working with data and varying levels of maturity and instrumentation to drive data management. To empower users with trustworthy data for innovation, we need to first tackle the fundamentals of managing great data: self-service, automation and scale.

Self-service means empowering users to do their job with minimal friction. It covers areas like seamless data discovery, ease of data production and tools that democratize data access. Automation ensures that all core data management capabilities are embedded in the tools and experiences that enable users to work with data. Data ecosystems need to scale — especially in the AI era.
Among other considerations, enterprises need to consider the scalability of certain technologies, resilience capabilities and service level agreements that set baseline obligations for how data is to be managed (as well as enforcement mechanisms for such agreements). These principles lay the foundation to produce and consume great data.

Producing great data

Data producers are responsible for onboarding and organizing data, enabling quick and efficient consumption. A well-designed, self-service portal can play a key role here by allowing producers to interact seamlessly with systems across the ecosystem — such as storage, access controls, approvals, versioning and business catalogs. The goal is to create a unified control plane that mitigates the complexity of these systems, making data available in the right format, at the right time and in the right place.

To scale and enforce governance, enterprises can choose between a central platform and a federated model — or even adopt a hybrid approach. A central platform simplifies data publishing and governance rules, while a federated model offers flexibility, using purpose-built SDKs to manage governance and infrastructure locally. The key is to implement consistent mechanisms that ensure automation and scalability, enabling the business to reliably produce high-quality data that fuels AI innovation.

Consuming great data

Data consumers — such as data scientists and data engineers — need easy access to reliable, high-quality data for rapid experimentation and development. Simplifying the storage strategy is a foundational step. By centralizing compute within the data lake and using a single storage layer, enterprises can minimize data sprawl and reduce complexity by enabling compute engines to consume data from a single storage layer. Enterprises should also adopt a zone strategy to handle diverse use cases.
For instance, a raw zone may support expanded data and file types such as unstructured data, while a curated zone enforces stricter schema and quality requirements. This setup allows for flexibility while maintaining governance and data quality. Consumers can use these zones for activities like creating personal spaces for experimentation or collaborative zones for team projects. Automated services ensure data access, lifecycle management and compliance, empowering users to innovate with confidence and speed.

Lead with simplicity

Effective AI strategies are grounded in robust, well-designed data ecosystems. By simplifying how you produce and consume data — and improving the quality of said data — businesses can empower users to innovate in new performance-driving areas with confidence. As a foundation, it’s paramount that businesses prioritize ecosystems and processes that enhance trustworthiness and accessibility. By implementing the principles outlined above, they can do just that: building scalable and enforceable data management that will power rapid experimentation in AI and ultimately deliver long-term business value.

Marty Andolino is VP, Software Engineering at Capital One. Kajal Wood is Sr. Director, Software Engineering at Capital One.

VB Lab Insights content is created in collaboration with a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact [email protected].


5. What role should religion play in Muslim- and Jewish-majority countries?

In a number of countries with sizable Muslim and Jewish populations, we asked Muslim and Jewish adults for their views on religion and governance – specifically, whether religious law should be the official or state law for people who share their religion, and whether their country can be both a democratic country and a Muslim or Jewish country.

We find that large majorities of Muslims in Bangladesh, Indonesia, Malaysia and Nigeria believe sharia, or Islamic law, should be the official law for Muslims in their country. Much smaller shares of Muslims in Israel and Turkey agree. Among Israelis who are Jewish, about a third support making halakha, or Jewish law, the state law for Jews in Israel.

When it comes to whether states can have a religious character and be a democracy:
- Majorities of Bangladeshis, Indonesians, Malaysians, Tunisians and Turks say their country can be both a democracy and a Muslim state.
- A minority of Nigerians think Nigeria can be both democratic and Muslim.
- A majority of Israelis think Israel can be both democratic and Jewish – though Jewish Israelis are more than twice as likely as Muslim Israelis to say this.

Should Muslims be governed by sharia?

Support for Islamic religious law, also known as sharia, is widespread in several of the Muslim-majority countries surveyed.

Sharia, or Islamic law, offers moral and legal guidance for nearly all aspects of life for Muslims, from marriage and divorce, to inheritance, contracts and criminal punishments. Sharia in its broadest definition refers to the ethical principles set down in Islam’s holy book (the Quran) and by examples of actions by the Prophet Muhammad (sunna). The Islamic jurisprudence that comes out of the human exercise of codifying and interpreting these principles is known as fiqh. Muslim scholars and jurists continue to debate the boundary between sharia and fiqh as well as other aspects of Islamic law.
About nine-in-ten Muslims in Bangladesh, Indonesia and Malaysia say they favor a legal system in which Muslims are bound by Islamic law. Roughly three-quarters of Nigerian Muslims agree. At least half of Muslims in each of these countries say they strongly favor making sharia the official law for those who share their religion.

Israeli Muslims, who make up about a fifth of their country’s population, are evenly split on the question: 46% favor making sharia the official law for Muslims in Israel, while 45% oppose this. An additional 9% did not answer the question.

Over 90% of Turkey’s population is Muslim. Yet only about a third of Turkish Muslims (32%) favor granting official status to Islamic law. Almost half – a 48% plurality – say they strongly oppose making sharia the law for Muslims in their country.

Support for making sharia the official law for Muslims is somewhat correlated with religiousness. Muslim populations with higher rates of daily prayer are more in favor of making sharia the law for Muslims in their country. For example, among Malaysian Muslims, 90% say they pray at least daily, and 93% are in favor of making sharia the official law. Meanwhile, in Israel, 58% of Muslims pray at least daily, and 46% support sharia.

Among Muslims in Israel and Turkey, opinions vary by age. In both countries, Muslims ages 50 and older are more likely than those ages 18 to 34 to favor making sharia the official law for Muslims. In Turkey, about four-in-ten adults with lower levels of education believe sharia should be the law for Muslims. Only 22% of Turks with higher levels of education agree. Also in Turkey, Muslim supporters of the governing Justice and Development Party are more than twice as likely as Muslims who don’t support the party to favor a legal system based on sharia (55% vs. 20%).
(For more on religion and governance in Turkey, read our October report: “Turks Lean Negative on Erdoğan, Give National Government Mixed Ratings”)

Should halakha be the law for Jews?

In Israel, the world’s only majority-Jewish country, we asked Jews whether halakha – the traditional set of rules and regulations that govern Jewish life – should be the state law for people who share their religion.

Halakha, or Jewish law, refers to the set of rules and practices that govern Jewish life. They originate from the Torah (the first five books of the Hebrew Bible), the Oral Torah, other Jewish scripture and their interpretations by Jewish scholars over the years. There are halakhic laws to regulate how Jews pray, celebrate holidays, work, eat, dress and conduct their relationships with other Jews and non-Jews. Attitudes towards halakha generally follow the spectrum of religious observance: Haredim and other Orthodox Jews consider it essential to follow these rules, while less religious Jews tend to oppose enforcing halakha.

About a third of Israeli Jews say they favor a legal system for Jews based on Jewish law, while six-in-ten or so oppose such a system. A plurality of 37% strongly oppose being legally bound by halakha.

Jews in Israel differ significantly in their views of halakha: Haredi (“ultra-Orthodox”) and Dati (“religious”) Jews are significantly more likely to favor making halakha the official law for Jews in Israel than either Masorti (“traditional”) Jews or Hiloni (“secular”) Jews. About nine-in-ten Haredi and Dati Jews express this opinion, while 20% of Masorti Jews and 4% of Hiloni Jews agree. Haredim and Datiim, as well as Hilonim, feel strongly on the subject: Half of Haredim and Datiim strongly favor a legal system for Jews based on Jewish law, while 70% of Hilonim strongly oppose this.

Similarly, more than eight-in-ten Israeli Jews who pray at least daily say halakha should be the law for Jews in Israel.
Only 13% of Jews who pray less often express this opinion. Younger Jews (ages 18 to 34) are twice as likely as Jews ages 50 and older to say they strongly favor making halakha the official law for Jews in Israel (24% vs. 12%). Only a quarter of Israeli Jews with a postsecondary education favor making halakha the state law for


Looking back to look ahead: from Deepfakes to DeepSeek what lies ahead in 2025

The past year was a whirlwind for CIOs and CISOs, marked by the rapid expansion of enterprise AI, persistent cyber threats, and the growing menace of deepfakes. Added to this tumult were emerging threats from activist hackers, who found innovative ways to infiltrate corporate data systems, banking networks, and social media platforms. Throw bad actors capitalizing on a heated political climate into the mix, and that’s a lot of challenges for any CIO or CISO to handle.

Yet there are likely more challenges to come. As I write this, the world is learning about DeepSeek, the new advanced AI model developed by High-Flyer, a Chinese hedge fund. The open-source advanced AI architecture has already been attacked and is also being viewed as a conduit for new data exploitations and cybersecurity attacks.

AI in enterprise

2024 witnessed unprecedented growth in enterprise AI, transforming far beyond chatbots and automated support. Cloud providers such as Microsoft, Google, and AWS heavily invested in AI infrastructure, as did venture capital funds, producing a wide range of solutions for enterprises to jump in “feet first” with AI apps that automate different critical tasks, with data agents leading the way. Other uses for enterprise AI included data collection, analysis, customer service, and risk management.


How Cos. Can Respond To CFPB Digital Asset Safeguard Plan

By Allison Raley (January 30, 2025, 10:22 AM EST) — The regulation of digital assets, such as cryptocurrencies and video game payment mechanisms, has long presented challenges due to a lack of clear regulatory guidelines. Agencies have grappled with defining their oversight boundaries while these markets rapidly evolve…


Ex-Google, Apple engineers launch unconditionally open source Oumi AI platform that could help to build the next DeepSeek

If it wasn’t clear before, it’s definitely very clear now: Open source really does matter for AI. The success of DeepSeek-R1 has substantively proven there is a need and demand for open-source AI.

But what exactly is open-source AI? For Meta and its Llama models, it means free access to use the model, with some conditions. DeepSeek is available under a permissive open-source license providing significant access to its architecture and capabilities. However, the specific training code and detailed methodologies, particularly those involving reinforcement learning (RL) techniques like Group Relative Policy Optimization (GRPO), have not been publicly disclosed. This omission limits the community’s ability to fully understand and replicate the model’s training process.

What neither DeepSeek nor Llama enables, however, is full unconditional access to all the model code, including weights as well as training data. Without all that information, developers can still work with the open model, but they don’t have all the necessary tools and insights to understand how it really works and, more importantly, how to build an entirely new model.

That’s a challenge that a new startup led by former Google and Apple AI veterans aims to solve. Launching today, Oumi is backed by an alliance of 13 leading research universities including Princeton, Stanford, MIT, UC Berkeley, University of Oxford, University of Cambridge, University of Waterloo and Carnegie Mellon. Oumi’s founders raised $10 million, a modest seed round they say meets their needs. While major players like OpenAI contemplate $500 billion investments in massive data centers through projects like Stargate, Oumi is taking a radically different approach. The platform provides researchers and developers with a complete toolkit for building, evaluating and deploying foundation models.
“Even the biggest companies can’t do this on their own,” Oussama Elachqar, cofounder of Oumi and previously a machine learning engineer at Apple, told VentureBeat. “We were effectively working in silos within Apple, and there are many other silos happening across the industry. There has to be a better way to develop these models collaboratively.”

What open-source models like DeepSeek and Llama are missing

Oumi CEO and former Google Cloud AI senior engineering manager Manos Koukoumidis told VentureBeat that researchers consistently tell him AI experimentation has become extremely complex. While today’s open models are a step forward, it’s not enough. Koukoumidis explained that with current “open” AI models like DeepSeek-R1 and Llama, an organization can use the model and deploy it on their own. What’s missing is that anyone else who wants to build on the model doesn’t know exactly how it was built. The Oumi founders believe this lack of transparency is a major hindrance to collaborative AI research and development. Even a project like Llama requires a significant amount of effort from researchers to figure out how to reproduce and build upon the work.

How Oumi works to open AI for enterprise users, researchers and everyone else

The Oumi platform works by providing an all-in-one environment that streamlines the complex workflows involved in building AI models. Koukoumidis explained that to build a foundation model, there are typically 10 or more steps that need to be done, often in parallel. Oumi integrates all necessary tools and workflows into a unified environment, eliminating the need for researchers to piece together and configure various open-source components.
Key technical features include:
- Support for models ranging from 10M to 405B parameters
- Implementation of advanced training techniques including SFT, LoRA, QLoRA and DPO
- Compatibility with both text and multimodal models
- Built-in tools for training data synthesis and curation using LLM judges
- Deployment options through modern inference engines like vLLM and SGLang
- Comprehensive model evaluation across standard industry benchmarks

“We don’t have to deal with the open-source development hell of figuring out what you can combine and what works well,” Koukoumidis explained. The platform allows users to start small, using their own laptops for initial experiments and model training. As users progress, they can then scale up to larger compute resources, such as university clusters or cloud providers, all within the same Oumi environment.

You don’t need massive training infrastructure to build an open model

One of the big surprises with DeepSeek-R1 is the fact that it was apparently built with a fraction of the resources that Meta or OpenAI use to build their models. As OpenAI and others invest billions in centralized infrastructure, Oumi is betting on a distributed approach that could dramatically reduce costs. “The idea that you need hundreds of billions [of dollars] for AI infrastructure is fundamentally flawed,” Koukoumidis said. “With distributed computing across universities and research institutions, we can achieve similar or better results at a fraction of the cost.”

The initial focus for Oumi is to build out the open-source ecosystem of users and development. But that’s not all the company has planned. Oumi plans to develop enterprise offerings to help businesses deploy these models in production environments.


How AI Can Help (Or Deceive) Gamblers

Thanks to the legalization of gambling-related activities in many parts of the world, the betting industry is booming. The field’s current market size is over a billion US dollars, according to Statista, including both on-site and online betting operations. As the market flourishes, a growing number of bettors are hoping that AI will help them beat the odds.

Playing the Odds

AI’s role in gambling is still relatively new, with operators only just beginning to explore its potential impact on backend systems and player platforms, says Yoel Zuckerberg, chief product officer at Soft2Bet, an online casino and sportsbook software provider. In an online interview, he notes that most industry players currently encounter AI only in limited forms within games, yet he believes that the technology’s role is likely to expand. “In the near future, AI is expected to play an increasingly central role in gaming platforms, enhancing personalization and engagement.”

AI’s strongest attribute lies in its ability to enhance personalization and interactivity, Zuckerberg says. “By tracking player behaviors, preferences, and patterns, AI can deliver tailored, bespoke gameplay,” he explains. “Integrating gamification elements, such as rewards and challenges, AI can also foster stronger customer loyalty and engagement.”

Bettors can turn to AI to uncover patterns that provide valuable insights for making informed decisions, says Marin Cristian-Ovidiu, CEO of Online Games.IO, in an online interview. However, when it comes to games of pure chance such as slots or roulette, AI has little to offer, since those outcomes are completely random. Many gambling operators are understandably wary of AI, worried that the technology could soon shift the balance of player engagement and strategy, Cristian-Ovidiu says.
For gamblers eager to explore AI’s gambling potential, he recommends starting with, and becoming familiar with, data analytics platforms.

AI is rapidly transforming the gambling industry, offering both opportunities and challenges, says Christian Nzouatoum, founder of Nzouat, a firm specializing in small business AI and software architecture. “In areas like sports betting, poker, and blackjack, AI can be a powerful tool for gamblers, allowing them to analyze massive datasets and make informed decisions based on predictive models,” he observes via email. “For example, in sports betting, AI can process player statistics, team dynamics, and even external factors like weather conditions, to offer insights that go beyond what a human could easily calculate.” In poker, Nzouatoum notes, “AI tools can assess an opponent’s behavior and adjust strategies accordingly.”

On the flip side, AI has only limited applicability in games of pure chance, such as slot machines and roulette, where outcomes are entirely unpredictable, Zuckerberg says. “However, it can still add value by personalizing the player experience, customizing rewards, and creating engagement-enhancing features within virtual slots and similar games.”

Other Concerns

Gambling organizations, including casinos and sportsbooks, are keeping a close eye on AI developments. “They understand the advantages but are also concerned about maintaining fair play and game integrity,” says video game blogger Dane Nk in an online interview. For individuals looking to dive into AI-assisted gambling, Nk suggests starting with data analysis tools geared toward bettors, of which there are many.
“They can offer valuable insights, but remember that the human touch — skills and game knowledge — should never be overlooked.”

Since AI’s regulatory framework remains largely undefined, with many jurisdictions lacking specific guidelines for its use, businesses — including gambling operators — are currently operating in a gray area. “To mitigate the risks, companies should stay informed on regulatory developments and strengthen internal policies to ensure compliance,” Zuckerberg advises.

Gambling Addiction Detection

Recent studies reveal a complex outlook in which AI is both a potential savior and a cunning manipulator in the world of gambling and addiction, says Christian Perry, CEO of Undetectable AI, a firm offering AI detection and conversion technology. The key is balance and responsible use, Perry states in an email interview. He believes that casinos can, and should, identify problematic gamblers using AI, and take every possible measure to prevent exploiting them in any way. “In person and online casinos should acknowledge the benefits and risks of using AI,” he says.

Betting on the Future

AI is transforming not only the player experience but also gambling enterprises’ operational efficiency and strategic approach. “As we move forward, we anticipate that AI will play an integral role in shaping a more responsible, player-centric gaming environment,” Zuckerberg says. “It’s essential for [gambling] organizations to prioritize ethical AI practices, stay updated on regulations, and maintain a strong focus on transparency.”


Trump's DEI Cuts Threaten USPTO Innovation Goals

By Theresa Schliep (January 31, 2025, 1:26 PM EST) — President Donald Trump’s recent actions to purge diversity programs from the federal government and private sector could undermine one of the top objectives of the U.S. Patent and Trademark Office in recent years: expanding access to innovation…
