OpenAI CEO responds to report of GPT-5 Orion coming later this year: ‘Fake news out of control’

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More

The Verge last night published an exclusive, seemingly well-researched and well-sourced report (it’s great, in my opinion; read it here) from journalists Kylie Robison and Tom Warren stating that OpenAI plans to launch another new frontier AI model, codenamed Orion — which may or may not be GPT-5 — by December. Yet two hours after the article went live, Sam Altman, OpenAI’s co-founder and CEO, took to X and replied directly to Robison’s share of the article, writing “fake news out of control.” Altman hasn’t elaborated much since then, from what I’ve seen, and the response is notably not a direct denial of the claims — he didn’t write “No” or “this is false,” much less specify which part of the detailed article is wrong. Is OpenAI not working on a new frontier model called Orion? That would contradict prior reporting from outlets including The Information that it has such an effort underway internally — which, to my knowledge, OpenAI never directly denied. Is it not planning a release later this year? Either way, the post is clearly an attempt to push back on the reporting as it stands. It’s an interesting quasi-denial given how precise The Verge report is, noting specific details about Orion’s supposed release plans, its apparent focus on enterprise customers, and the possibility that it would be served up through an application programming interface (API) only at first: “Unlike the release of OpenAI’s last two models, GPT-4o and o1, Orion won’t initially be released widely through ChatGPT. Instead, OpenAI is planning to grant access first to companies it works closely with in order for them to build their own products and features, according to a source familiar with the plan. 
Another source tells The Verge that engineers inside Microsoft — OpenAI’s main partner for deploying AI models — are preparing to host Orion on Azure as early as November. While Orion is seen inside OpenAI as the successor to GPT-4, it’s unclear if the company will call it GPT-5 externally.” OpenAI’s last release of a new frontier model — o1-preview and o1-mini — occurred in early September, a little more than a month ago. Yet the wider reception of these large language models (LLMs) has been largely muted, in part because they are expensive for both the company and developers to operate, and also because they use a new “reasoning” architecture and are more limited in many ways than OpenAI’s GPT family of models, unable at this time to accept file uploads or to generate and analyze imagery. A new frontier model would help OpenAI capture the limelight again from rivals including Anthropic, which just this week unveiled a promising new agentic mode called “Computer Use” and a new version of its Claude family of LLMs. Whether OpenAI does end up releasing a new frontier model later this year or not, we’ll be following closely. For now, it seems, fans of the company and its models shouldn’t get their hopes up too soon. source

OpenAI CEO responds to report of GPT-5 Orion coming later this year: ‘Fake news out of control’ Read More »

Social Media MDL Judge Rips Meta, AGs' Agency Doc Fight

By Dorothy Atkins (October 25, 2024, 9:24 PM EDT) — A California federal judge Friday slammed counsel for Meta and dozens of state attorneys general during a contentious hearing in multidistrict litigation over claims social media is addictive for not reaching agreements on Meta’s demands for documents from 275 state agencies, telling both sides’ attorneys, “we should’ve never gotten here.”… source

Social Media MDL Judge Rips Meta, AGs' Agency Doc Fight Read More »

Fintech Reckoning: Will Incumbents Pick In-house AI over Startups?

As financial institutions explore the potential efficiencies of in-house AI deployment, the trend could change the role fintech startups play as purveyors of innovation. The nimble, boundary-pushing nature of startups often means they dabble in novel ideas before incumbents. As organizations great and small inject AI into their ecosystems, does that reduce their reliance on startups to drive new efficiencies and concepts? In this episode of DOS Won’t Hunt, Angela Friend, vice president of data science and AI with DailyPay; Adnan Masood, chief AI architect with UST; and John Lin, investor with F-Prime Capital, discuss how the spread of AI among incumbent financial institutions might affect their pursuit of innovation through startups. As efficiencies of AI continue to be touted, do incumbent financial institutions still need smaller, nimble startups to catalyze innovation? What are some common pain points financial institutions want to solve via technology? And how might startups try to speak to those pain points in ways large incumbents cannot? Listen to the full podcast here. source

Fintech Reckoning: Will Incumbents Pick In-house AI over Startups? Read More »

Telegram will provide data such as IP addresses and phone numbers in response to valid legal requests

Pavel Durov, CEO of the encrypted messaging app Telegram, said the company will provide users’ IP addresses and phone numbers to the relevant authorities in response to valid legal requests. In a post on Telegram, Durov said the service has revised its terms of service to stop criminals from abusing the platform. The move comes less than a month after his arrest in France, where he faces charges including complicity in the distribution of child sexual abuse material. The UAE-headquartered Telegram is known for its lax content moderation; it has typically ignored takedown requests from governments and frequently disregarded requests to disclose information about criminal suspects. Telegram has meanwhile assembled a dedicated content-moderation team and used AI to significantly improve the safety of Telegram search; all problematic content identified so far can no longer be reached through search. source

Telegram will provide data such as IP addresses and phone numbers in response to valid legal requests Read More »

The enterprise verdict on AI models: Why open source will win

The enterprise world is rapidly growing its usage of open source large language models (LLMs), driven by companies gaining more sophistication around AI – seeking greater control, customization, and cost efficiency. While closed models like OpenAI’s GPT-4 dominated early adoption, open source models have since closed the gap in quality, and are growing at least as quickly in the enterprise, according to multiple VentureBeat interviews with enterprise leaders. This is a change from earlier this year, when I reported that while the promise of open source was undeniable, it was seeing relatively slow adoption. But Meta’s openly available models have now been downloaded more than 400 million times, the company told VentureBeat – a rate 10 times higher than last year – with usage doubling from May through July 2024. This surge in adoption reflects a convergence of factors, from technical parity to trust considerations, that are pushing advanced enterprises toward open alternatives. “Open always wins,” declares Jonathan Ross, CEO of Groq, a provider of specialized AI processing infrastructure that has seen massive uptake from customers using open models. “And most people are really worried about vendor lock-in.” Even AWS, which made a $4 billion investment in closed-source provider Anthropic – its largest investment ever – acknowledges the momentum. “We are definitely seeing increased traction over the last number of months on publicly available models,” says Baskar Sridharan, AWS’ VP of AI & Infrastructure. AWS offers access to as many models as possible, both open and closed source, via its Bedrock service.

The platform shift by big app companies accelerates adoption

It’s true that among startups or individual developers, closed-source models like OpenAI’s still lead. But in the enterprise, things are looking very different. 
Unfortunately, there is no third-party source that tracks the open versus closed LLM race for the enterprise, in part because it’s near impossible to do: The enterprise world is too distributed, and companies are too private for this information to be public. An API company, Kong, surveyed more than 700 users in July. But the respondents included smaller companies as well as enterprises, so the sample was biased toward OpenAI, which without question still leads among startups looking for simple options. (The report also included other AI services like Bedrock, which is not an LLM but a service that offers multiple LLMs, including open source ones — so it mixes apples and oranges.) Image from a report from the API company, Kong. Its July survey shows ChatGPT still winning, and open models Mistral, Llama and Cohere still behind. But anecdotally, the evidence is piling up. For one, each of the major business application providers has moved aggressively recently to integrate open source LLMs, fundamentally changing how enterprises can deploy these models. Salesforce led the latest wave by introducing Agentforce last month, recognizing that its customer relationship management customers needed more flexible AI options. The platform enables companies to plug in any LLM within Salesforce applications, effectively making open source models as easy to use as closed ones. Salesforce-owned Slack quickly followed suit. Oracle also last month expanded support for the latest Llama models across its enterprise suite, which includes the big enterprise apps for ERP, human resources, and supply chain. SAP, another business app giant, announced comprehensive open source LLM support through its Joule AI copilot, while ServiceNow enabled both open and closed LLM integration for workflow automation in areas like customer service and IT support. “I think open models will ultimately win out,” says Oracle’s EVP of AI and Data Management Services, Greg Pavlik. 
The ability to modify models and experiment, especially in vertical domains, combined with favorable cost, is proving compelling for enterprise customers, he said. A complex landscape of “open” models While Meta’s Llama has emerged as a frontrunner, the open LLM ecosystem has evolved into a nuanced marketplace with different approaches to openness. For one, Meta’s Llama has more than 65,000 model derivatives in the market. Enterprise IT leaders must navigate these, and other options ranging from fully open weights and training data to hybrid models with commercial licensing. Mistral AI, for example, has gained significant traction by offering high-performing models with flexible licensing terms that appeal to enterprises needing different levels of support and customization. Cohere has taken another approach, providing open model weights but requiring a license fee – a model that some enterprises prefer for its balance of transparency and commercial support. This complexity in the open model landscape has become an advantage for sophisticated enterprises. Companies can choose models that match their specific requirements – whether that’s full control over model weights for heavy customization, or a supported open-weight model for faster deployment. The ability to inspect and modify these models provides a level of control impossible with fully closed alternatives, leaders say. Using open source models also often requires a more technically proficient team to fine-tune and manage the models effectively, another reason enterprise companies with more resources have an upper hand when using open source. Meta’s rapid development of Llama exemplifies why enterprises are embracing the flexibility of open models. AT&T uses Llama-based models for customer service automation, DoorDash for helping answer questions from its software engineers, and Spotify for content recommendations. Goldman Sachs has deployed these models in heavily regulated financial services applications. 
Other Llama users include Niantic, Nomura, Shopify, Zoom, Accenture, Infosys, KPMG, Wells Fargo, IBM, and The Grammy Awards. Meta has aggressively nurtured channel partners, and all major cloud providers now embrace Llama models. “The amount of interest and deployments they’re starting to see for Llama with their enterprise customers has been skyrocketing,” reports Ragavan Srinivasan, VP of Product at Meta, “especially after Llama 3.1 and 3.2 have come out. The large 405B model in particular is seeing a lot of really strong traction because very sophisticated, mature enterprise customers see the value of being able to switch between multiple models.”

The enterprise verdict on AI models: Why open source will win Read More »

UK Litigation Roundup: Here's What You Missed In London

By Max Austin (October 25, 2024, 9:03 PM BST) — This past week in London has seen the Competition and Markets Authority take action against a mattress retailer after it was caught pressuring its customers with misleading discounts, Lenovo and Motorola target ZTE Corporation with a patents claim, Lloyds Bank hit by another claim relating to the collapse of Arena Television and U.K. tax authority HMRC sued by the director of an electronics company that evaded millions of pounds in VAT…. source

UK Litigation Roundup: Here's What You Missed In London Read More »

How Cloud-Focused Upskilling Drives Business Growth

The cloud skills gap is one of the most significant challenges facing the tech industry today. IT leaders are grappling with a shortage of cloud-competent professionals, which is slowing down innovation and preventing organizations from fully leveraging cloud technologies. To remain competitive in a rapidly evolving landscape, it’s critical for IT leaders to address this gap by prioritizing upskilling initiatives that equip their teams with the expertise needed to meet cloud demands. Recent data shows that roughly 90% of tech workers under the age of 25 are considering new career opportunities, underscoring the need for skilled cloud professionals. Without strategic intervention, organizations risk missing out on the talent and innovation required to thrive in today’s cloud-driven world. The Case for Cloud-Focused Upskilling By providing both new and current employees with the skills to take advantage of emerging IT practices, upskilling can help bridge the cloud skills gap. Importantly, the prioritization of upskilling helps IT leaders meet cloud demands while supporting the growth of the next generation of tech workers. But despite the clear need for upskilling, fewer than 20% of company leaders report progress on these initiatives. Many struggle due to limited resources, insufficient buy-in from leadership, and low employee motivation. Combined with the rapid evolution of cloud technologies, this creates a bottleneck in cloud deployments, making it even more difficult to meet demand. The solution lies in making upskilling a higher priority. IT leaders must advocate for the value of upskilling, not just as a means of solving the skills gap, but as a driver of future growth and innovation. 
Champion Upskilling to Strengthen Your Organization Building a successful upskilling program takes more than just providing training — it requires a cultural shift within the organization that embraces continuous learning and development. Here’s how to get started. 1. Educate your organization on upskilling benefits To gain internal support, it’s crucial to highlight the long-term benefits of upskilling to employees and leadership alike. Upskilling can lead to improved efficiency, stronger data security, and more seamless cloud integration — advantages that can keep your organization ahead of the curve in the competitive tech landscape. By showcasing how upskilling empowers teams to leverage cloud technology more effectively, you can make a compelling case for prioritizing training initiatives. 2. Secure C-suite buy-in for cloud upskilling programs The key to any successful initiative is leadership buy-in. Demonstrating the potential return on investment of upskilling initiatives can help secure the support you need from the C-suite. Focus on how upskilling can lower operational costs, boost productivity, and create a more agile workforce capable of driving innovation. Citing success stories from organizations that have benefited from upskilling can also provide valuable examples of its impact. 3. Focus on Gen Z talent to drive cloud innovation Gen Z professionals bring a unique advantage to cloud upskilling initiatives — they are the first generation to have grown up with digital technologies and are naturally more adaptable to new tools and platforms. Engaging younger workers through upskilling programs not only helps close the cloud skills gap but also ensures your organization is nurturing the next generation of tech leaders. 4. 
Develop a supportive work culture for upskilling Gen Z professionals are drawn to organizations that prioritize growth and development. Offering clear pathways for cloud-related upskilling not only enhances employee engagement but also ensures your organization retains the talent needed to stay competitive in a cloud-first world. Embrace a Cloud-First Future With Upskilling Closing the cloud skills gap is no small task, but with a clear focus on upskilling, IT leaders can equip their teams with the expertise needed to thrive in a cloud-first world. By prioritizing training initiatives and fostering a supportive work environment, organizations can bridge the gap, drive innovation, and ensure sustainable growth in the ever-evolving tech landscape. source

How Cloud-Focused Upskilling Drives Business Growth Read More »

POS Terminals Explained By Experts: A Complete Guide For 2024

Retail tech has come a long way since the invention of cash registers. One point-of-sale system can now manage inventory, track sales, and accept payments — all on the go with one POS terminal. Affordable mobile terminals also allow businesses to adapt to constantly evolving in-person payment methods such as buy now, pay later (BNPL) and QR codes. To bring you insights into this popular payment technology, I leveraged my seven years of experience reviewing POS and payment systems, my degree in financial management, and a certificate in payments technology. What is a POS Terminal? Square Terminal, one of the most popular POS terminals on the market. Image: Square A point-of-sale, or POS, terminal is a compact piece of business hardware that comes with built-in POS software and a card reader to accept card payments, as well as cash and other forms of non-cash payment like gift cards. Terminals also typically have, or are connected to, cloud-based POS software to update business information such as inventory levels and sales in real time. History of Cash Registers and POS Systems The first cash register, built by James Ritty in 1879, could record transactions without error. Image: Mathematical Association of America Before the POS terminal, there was the cash register. Invented by James Ritty, a saloon owner from Ohio, in 1879, the cash register was designed to accurately record transactions to help users with their bookkeeping. The National Cash Register Company (NCR) eventually purchased the invention in 1884. During this time, electric motors, cash drawers, and paper rolls for receipts were added to the cash register. Did you know? Tech monopolies are not just a 21st-century problem. In 1921, the U.S. Government filed suit against NCR under the Sherman Antitrust Act. At the time, NCR controlled 95% of the cash register market. IBM introduced the first computer-based restaurant POS system in 1973, which came with an electronic cash register (ECR) and a client-server on the back end. 
But it wasn’t until 1979 that Visa and Mastercard released magstripe technology for accepting credit card payments at the point of sale. Europay, Visa, and Mastercard then developed EMV chips for credit cards in 1993, significantly improving credit card processing. Soon after, pioneers like PayPal, Verifone, and Ingenico developed mobile card readers, allowing businesses to accept payments anywhere. Eventually, PayPal transitioned away from its basic mobile card readers and, along with other competitors like Square, introduced POS terminals that offer better security and more advanced payment capabilities. As POS technology advanced, leading payment processors launched what are now called “Smart Terminals,” which combine payment acceptance with advanced POS features. How can POS terminals improve business operations? The advent of the internet enabled POS systems and terminals to evolve into a more efficient business solution. Nowadays, terminals offer quicker and more secure payment processing, customer engagement tools, better inventory control, and additional payment channels. Read more: Best POS systems for small business How do POS terminals work? The role of the POS terminal in the POS ecosystem is to compute the total cost of the transaction, accept payment, and keep a record of the transaction. During checkout At checkout, customers bring products to the POS terminal for purchase. The seller scans the product barcode with a barcode scanner, which pulls up the corresponding SKU in the POS system’s inventory catalog. The product information, including the list price, is then displayed, and the user enters the quantity to get the total cost. When collecting payment Once the total price is displayed, the customer presents their preferred payment method. If cash or another non-card payment: The seller enters the tender amount to prompt the cash drawer. 
If card: The seller swipes, inserts, or taps the card at the card reader on the POS terminal. If mobile or another digital payment: The customer presents proof of payment on their mobile device for the user to record the confirmation code. In the back end, a payment authorization request is initiated at this point. The transaction and payment data is encrypted and transmitted to the merchant’s payment processor, which routes it to the relevant financial institutions for verification. If the customer’s bank confirms that funds are available, the transaction will be authorized, and a payment approval will be displayed on the POS terminal. Otherwise, a declined payment notice will be displayed, and the customer will be asked to provide a different payment source. Logging the transaction Finally, a receipt for the completed transaction is generated and provided to the customer. A copy of the receipt is kept by the user. For card transactions, a transaction receipt is generated in triplicate, and one is given to the customer. At the back end, the POS system also records the sale and adjusts the available inventory in real time. Read more: POS deployment checklist What are the key features to look for in a POS terminal? The POS terminal is a combination of software and hardware components. Hardware setup Hardware is an essential feature of a POS terminal. To do its job, the POS terminal should have, at minimum: A computer-operated device that runs the POS software. A card reader for accepting card payments (swipe, tap, or dip). A thermal printer to generate receipts, or another means of delivering receipts. You may also want to consider: A barcode scanner to pull up product information. A cash drawer (for countertop terminals) for storing cash and other non-card payments, if you plan to accept cash. Payment gateway The payment gateway is a software component embedded in the POS system. 
This is the checkout window on the system display where the user enters the customer’s payment information. The payment gateway will display the available payment methods depending on your payment processing settings. It is also the payment gateway’s role to encrypt the transaction data before sending it to the payment processor for authentication. Payment processor The payment processor connects to the card network and other financial institutions that authorize the transfer…
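To make the flow concrete, here is a minimal sketch of the checkout, authorization, and logging steps described above. The class and method names are hypothetical, and the bank round trip is reduced to a simple funds check; a real terminal would encrypt the payment data and send it through a payment processor over the network.

```python
from dataclasses import dataclass, field

@dataclass
class Product:
    sku: str
    name: str
    price: float  # list price

@dataclass
class POSTerminal:
    catalog: dict                 # sku -> Product
    inventory: dict               # sku -> units on hand
    sales_log: list = field(default_factory=list)

    def scan(self, sku: str, qty: int = 1) -> float:
        """Look up the scanned SKU in the catalog and return the line total."""
        return self.catalog[sku].price * qty

    def authorize(self, amount: float, funds_available: float) -> bool:
        """Stand-in for the processor/bank round trip: approve only if the
        customer's bank confirms funds are available."""
        return funds_available >= amount

    def checkout(self, sku: str, qty: int, funds_available: float) -> str:
        total = self.scan(sku, qty)
        if not self.authorize(total, funds_available):
            return "declined"
        # Log the sale and adjust available inventory in real time.
        self.sales_log.append({"sku": sku, "qty": qty, "total": total})
        self.inventory[sku] -= qty
        return "approved"
```

The key design point mirrored from the article: the terminal only computes totals and records outcomes, while the approve/decline decision belongs to the (here simulated) financial institutions.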

POS Terminals Explained By Experts: A Complete Guide For 2024 Read More »

DeepMind and Hugging Face release SynthID to watermark LLM-generated text

Google DeepMind and Hugging Face have just released SynthID Text, a tool for marking and detecting text generated by large language models (LLMs). SynthID Text encodes a watermark into AI-generated text in a way that helps determine if a specific LLM produced it. More importantly, it does so without modifying how the underlying LLM works or reducing the quality of the generated text. The technique behind SynthID Text was developed by researchers at DeepMind and presented in a paper published in Nature on Oct. 23. An implementation of SynthID Text has been added to Hugging Face’s Transformers library, which is used to create LLM-based applications. It is worth noting that SynthID is not meant to detect text generated by any arbitrary LLM; it is designed to watermark the output of a specific LLM. Using SynthID does not require retraining the underlying LLM. It uses a set of parameters that configure the balance between watermarking strength and response preservation. An enterprise that uses LLMs can have different watermarking configurations for different models. These configurations should be stored securely and privately to avoid being replicated by others. For each watermarking configuration, you must train a classifier model that takes in a text sequence and determines whether it contains the model’s watermark. Watermark detectors can be trained with a few thousand examples of normal text and responses that have been watermarked with the specified configuration. We’ve open sourced @GoogleDeepMind‘s SynthID, a tool that allows model creators to embed and detect watermarks in text outputs from their own LLMs. 
More details published in @Nature today: https://t.co/5Q6QGRvD3G — Sundar Pichai (@sundarpichai) October 23, 2024 How SynthID Text works Watermarking is an active area of research, especially with the rise and adoption of LLMs in different fields and applications. Companies and institutions are looking for ways to detect AI-generated text to prevent mass misinformation campaigns, moderate AI-generated content, and prevent the use of AI tools in education. Various techniques exist for watermarking LLM-generated text, each with limitations. Some require collecting and storing sensitive information, while others require computationally expensive processing after the model generates its response. SynthID uses “generative watermarking,” a class of watermarking techniques that do not affect LLM training and only modify the model’s sampling procedure. Generative watermarking techniques modify the next-token generation procedure to make subtle, context-specific changes to the generated text. These modifications create a statistical signature in the generated text while maintaining its quality. A classifier model is then trained to detect this statistical signature and determine whether a response was generated by the model. A key benefit of this technique is that detecting the watermark is computationally efficient and does not require access to the underlying LLM. SynthID Text process (source: Nature) SynthID Text builds on previous work on generative watermarking and uses a novel sampling algorithm called “Tournament sampling,” which uses a multi-stage process to choose the next token when creating watermarks. The watermarking technique uses a pseudo-random function to augment the generation process of any LLM such that the watermark is imperceptible to humans but visible to a trained classifier model. The integration into the Hugging Face library will make it easy for developers to add watermarking capabilities to existing applications. 
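As an illustration of the generative-watermarking idea, here is a toy sketch in Python. It is not DeepMind’s actual algorithm: the single-bit scoring function, the pairwise tournament, and the one-token context are simplifications assumed for clarity, and the real detector is a trained classifier rather than a fixed threshold. Still, it shows the shape of the scheme: a keyed pseudo-random function scores candidate tokens, sampling is biased toward high-scoring tokens, and detection checks whether a text’s mean score exceeds the roughly 0.5 expected by chance.

```python
import hashlib
import random

def g(key: int, context: tuple, token: str) -> int:
    """Keyed pseudo-random function mapping (context, token) to a 0/1 score.
    A toy stand-in for the watermarking function described in the article."""
    digest = hashlib.sha256(f"{key}|{context}|{token}".encode()).digest()
    return digest[0] & 1

def tournament_pick(candidates, key, context, rng):
    """Toy tournament sampling: shuffle the candidates, pair them up, and
    keep the higher-scoring token of each pair (ties broken randomly).
    The winner is biased toward tokens scoring 1, which is the statistical
    signature the detector later tests for."""
    pool = list(candidates)
    while len(pool) > 1:
        rng.shuffle(pool)
        winners = [pool[-1]] if len(pool) % 2 else []  # odd one out gets a bye
        for a, b in zip(pool[0::2], pool[1::2]):
            ga, gb = g(key, context, a), g(key, context, b)
            winners.append(a if ga > gb else b if gb > ga else rng.choice([a, b]))
        pool = winners
    return pool[0]

def watermark_score(tokens, key):
    """Mean g-score of each token, using the previous token as context.
    Unwatermarked text averages about 0.5; watermarked text scores higher."""
    scores = [g(key, (tokens[i - 1],), tokens[i]) for i in range(1, len(tokens))]
    return sum(scores) / len(scores) if scores else 0.0
```

The real system runs its multi-stage tournament over the model’s own candidate tokens and trains a classifier for detection; this sketch keeps only the core structure of keyed scoring, biased sampling, and a statistical test that needs no access to the underlying LLM.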
To demonstrate the feasibility of watermarking in large-scale production systems, DeepMind researchers conducted a live experiment that assessed feedback from nearly 20 million responses generated by Gemini models. Their findings show that SynthID was able to preserve response quality while also remaining detectable by their classifiers. According to DeepMind, SynthID Text has been used to watermark Gemini and Gemini Advanced. “This serves as practical proof that generative text watermarking can be successfully implemented and scaled to real-world production systems, serving millions of users and playing an integral role in the identification and management of artificial-intelligence-generated content,” they write in their paper. Limitations According to the researchers, SynthID Text is robust to some post-generation transformations, such as cropping pieces of text or modifying a few words, and it is resilient to paraphrasing to some degree. However, the technique also has a few limitations. For example, it is less effective on queries that require factual responses, which leave little room for modification without reducing accuracy. The researchers also warn that the quality of the watermark detector can drop considerably when the text is rewritten thoroughly. “SynthID Text is not built to directly stop motivated adversaries from causing harm,” they write. “However, it can make it harder to use AI-generated content for malicious purposes, and it can be combined with other approaches to give better coverage across content types and platforms.” source

DeepMind and Hugging Face release SynthID to watermark LLM-generated text Read More »