
AI wins another Nobel, this time in Chemistry: Google DeepMinders Hassabis and Jumper awarded for AlphaFold

Demis Hassabis, co-founder and CEO of Google’s AI division DeepMind; John Jumper, Senior Research Scientist at Google DeepMind; and David Baker of the University of Washington have been awarded the 2024 Nobel Prize in Chemistry for their groundbreaking work in predicting the structures of proteins and designing new ones.

The DeepMinders won for AlphaFold 2, an AI system launched in 2020 that predicts the 3D structure of proteins from their amino acid sequences. Meanwhile, Baker won for leading a laboratory where the 20 amino acids that form proteins were used to design new ones, including proteins for “pharmaceuticals, vaccines, nanomaterials and tiny sensors,” according to the Nobel committee’s announcement.

The award highlights how artificial intelligence is revolutionizing biological science — and comes just one day after what I believe to be the first Nobel Prize awarded to an AI technology, that one for Physics, awarded to former Googler Geoffrey Hinton and Princeton professor John J. Hopfield for their work on artificial neural networks. As with the Physics prize, the Royal Swedish Academy of Sciences announced an award of 11 million Swedish kronor (around $1 million USD), split among the laureates: half goes to Baker, while Hassabis and Jumper share the other half, a quarter of the total each.

Breakthrough on a biology problem unsolved for half a century

The committee emphasized the unprecedented impact of AlphaFold, describing it as a breakthrough that solved a 50-year-old problem in biology: protein structure prediction, or how to predict the three-dimensional structure of a protein from its amino acid sequence. For decades, scientists knew that a protein’s function is determined by its 3D shape, but predicting how the string of amino acids folds into that shape was incredibly complex. Researchers had attempted to solve this since the 1970s, but due to the vast number of possible folding configurations (known as Levinthal’s paradox), accurate predictions remained elusive.

AlphaFold, developed by Google DeepMind, made a breakthrough by using AI to predict the 3D structures of proteins with near-experimental accuracy: its predictions are typically within an error margin of around 1 Ångström (0.1 nanometers) for most proteins, close enough to be almost indistinguishable from results obtained with traditional experimental methods such as X-ray crystallography, cryo-electron microscopy, or nuclear magnetic resonance (NMR) spectroscopy. Because the model’s predictions so closely match experimentally determined structures, it has become a transformative tool for biologists.

Hassabis and Jumper’s work, developed at DeepMind’s London laboratory, has transformed the fields of structural biology and drug discovery, offering a powerful tool to scientists worldwide. “AlphaFold has already been used by more than two million researchers to advance critical work, from enzyme design to drug discovery,” Hassabis said in a statement.
“I hope we’ll look back on AlphaFold as the first proof point of AI’s incredible potential to accelerate scientific discovery.”

AlphaFold’s Global Impact

AlphaFold’s predictions are freely accessible via the AlphaFold Protein Structure Database, making it one of the most significant open-access scientific tools available. More than two million researchers from 190 countries have used the tool, democratizing access to cutting-edge AI and enabling breakthroughs in fields as varied as molecular biology, drug development, and even climate science. By predicting the 3D structure of proteins in minutes—tasks that previously took years—AlphaFold is accelerating scientific progress. The system has been used to tackle antibiotic resistance, design enzymes that degrade plastic, and aid in vaccine development, underscoring its utility in both healthcare and sustainability.

John Jumper, co-lead of AlphaFold’s development, reflected on its significance: “We are honored to be recognized for delivering on the long promise of computational biology to help us understand the protein world and to inform the incredible work of experimental biologists.” He emphasized that AlphaFold is a tool for discovery, helping scientists understand diseases and develop new therapeutics at an unprecedented pace.

The Origins of AlphaFold

The roots of AlphaFold can be traced back to DeepMind’s broader exploration of AI. Hassabis, a chess prodigy, began his career in 1994 at the age of 17, co-developing the hit video game Theme Park, which was released on June 15 that year. After studying computer science at Cambridge University and completing a PhD in cognitive neuroscience, he co-founded DeepMind in 2010, using his understanding of chess to raise funding from famed contrarian venture capitalist Peter Thiel. The company, which specializes in artificial intelligence, was acquired by Google in 2014 for around $500 million USD. As CEO of Google DeepMind, Hassabis has led breakthroughs in AI, including creating systems that excel at games like Go and chess. By 2016, DeepMind had achieved global recognition for developing AI systems that could master the ancient game of Go, beating world champions. It was this expertise in AI that DeepMind began applying to science, aiming to solve more meaningful challenges, including protein folding.

The AlphaFold project formally launched in 2018, entering the Critical Assessment of protein Structure Prediction (CASP) competition—a biennial global challenge to predict protein structures. That year, AlphaFold won the competition, outperforming other teams and heralding a new era in structural biology. But the real breakthrough came in 2020, when AlphaFold 2 was unveiled, solving many of the most difficult protein folding problems with an accuracy previously thought unattainable.

AlphaFold 2’s success marked the culmination of years of research into neural networks and machine learning, areas in which DeepMind has become a global leader. The system is trained on vast datasets of known protein structures and amino acid sequences, allowing it to generalize predictions for proteins it has never encountered—a feat that was previously unimaginable. Earlier this year, Google DeepMind and Isomorphic Labs unveiled AlphaFold 3, the third generation of the model, which…


Cyberus creates the sound of a passwordless future

Passwordless computer security implies a world where you use your apps and network services seamlessly, with automatic identification, verification and authorization, and without any need to manage dozens of passwords and multi-step, multi-factor logins. But are those types of security checks truly passwordless if a user still has to input something anyway, such as a code from an email or an SMS, or run a biometric scan, or complete two-factor verification? Technically no “word” is being used, but user input is still required before they are allowed through into their account.

Cyberus Labs, a startup based in San Francisco and Warsaw, Poland, has early users for its passwordless security solution, which lives closer to the promise of a transparent and seamless computing future. It uses very short chirps of inaudible sound to identify and verify a user. The sound can be generated by a phone or an Alexa-type device, and many electronic devices with a microphone and a speaker of some kind could be used.

“The sound is very short and doesn’t require much processing power which means it can be easily integrated into edge devices, Internet of Things devices, which have limited processing and memory resources, which makes them easy to hack,” says Jack Wolosewicz, inventor of the audio watermark technology and CEO of Cyberus. Each audio segment is unique and is valid for only milliseconds, making it impossible for an eavesdropping hacker to compromise the security of the system.

In a home setting, logging into a bank account would involve an inaudible audio chirp between the user’s computer and an Alexa or Google speaker that is listening for it. The exchange would be completely transparent and seamless for the user, who no longer needs to remember or manage dozens of passwords.

Cyberus can be used in a variety of consumer computer security applications, but there is a bigger opportunity in securing billions of future IoT devices. Most are notoriously easy to hack because they have limited computing resources and computer security based on outdated methods. The latest version of the Cyberus audio watermark technology is squarely aimed at IoT solutions. With hundreds of billions of such devices embedded in our future, security is going to be a massive problem and a massive opportunity.

An IoT security solution has to be resource-light because edge devices will always be constrained in processing power and memory. That describes the Cyberus ELIoT platform, an end-to-end security platform for IoT smart devices with an ultra-light telemetry component; the audio chirp, for example, is just 32 bits of encrypted data.

The Cyberus ELIoT system was recently selected by the city of Katowice, Poland’s 11th-largest city with a population of more than 300,000, to protect its networks of smart sensors controlling vital city infrastructure such as water supplies and power networks. Safeguarding vital city infrastructure from hackers is very much top of mind for Katowice’s administration with the outbreak of war in neighboring Ukraine, and the very serious threat from Russian state-funded hackers targeting Poland because of its material support for Ukraine’s armed forces. Defending against an attack by Russian hackers would be a serious test of the Cyberus technology.

A global startup

Cyberus is a great example of a global startup, one that is able to take advantage of opportunities anywhere in the world while applying startup best practices.
The founders between them have several decades of Silicon Valley experience plus knowledge and contacts in European markets. Wolosewicz is currently based in San Francisco. His co-founder George Slawek spent two decades in Silicon Valley and New York but now works from London, while a third co-founder, Marek Ostafil, is based in Warsaw. A global approach means more choices when it comes to funding. And every Silicon Valley veteran knows the value of keeping hold of equity.

In 2018, Cyberus was selected by the European Union from among many other startups to receive a large investment from its Horizon innovation program. Horizon’s budget of nearly $64 billion funded a seven-year mission from 2014 to 2020 to invest in research institutions and startups. Over 150,000 grants were made, producing nearly 100,000 research papers and the filing of about 2,500 patents and trademarks.

“The Horizon funding has given us tremendous credibility. And the funding is great because it does not require us to give up any equity. But we did have to meet all of their technical deliverables, which we fully completed in 2020,” says George Slawek, co-founder and head of business development.

With the rise of remote work, startups such as Cyberus will likely become more common because the advantages in terms of funding and hiring cannot be ignored.
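To make the short-lived chirp concept described above concrete, here is a minimal Python sketch of a one-time token that is valid for only a brief window. This is not Cyberus’s actual protocol or format: the shared key, the 200 ms validity window, and the 32-bit truncation are illustrative assumptions, and the ultrasonic transport itself is out of scope.

    import hmac
    import hashlib
    import struct
    import time

    SHARED_KEY = b"device-provisioned-secret"  # hypothetical per-device key
    VALID_FOR_MS = 200                         # assumed token lifetime in milliseconds

    def make_token(now_ms=None):
        """Derive a short one-time token bound to the current time window."""
        now_ms = int(time.time() * 1000) if now_ms is None else now_ms
        window = now_ms // VALID_FOR_MS
        mac = hmac.new(SHARED_KEY, struct.pack(">Q", window), hashlib.sha256).digest()
        return mac[:4]  # truncated to 32 bits, mirroring the article's figure

    def verify_token(token, now_ms=None):
        """Accept the token only for the current or immediately previous window."""
        now_ms = int(time.time() * 1000) if now_ms is None else now_ms
        for skew in (0, 1):  # tolerate one window of clock skew / transit delay
            window = now_ms // VALID_FOR_MS - skew
            expected = hmac.new(SHARED_KEY, struct.pack(">Q", window), hashlib.sha256).digest()
            if hmac.compare_digest(expected[:4], token):
                return True
        return False

    if __name__ == "__main__":
        token = make_token()
        print(verify_token(token))                                         # True within the window
        print(verify_token(token, now_ms=int(time.time() * 1000) + 1000))  # False once expired

In a real deployment the token would be modulated into the inaudible chirp, and the verifier would also remember tokens it has already accepted so that a recording replayed within the same window is rejected.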


Hugging Face’s new tool lets devs build AI-powered web apps with OpenAI in just minutes

Hugging Face has released an innovative new Python package that allows developers to create AI-powered web apps with just a few lines of code. The tool, called “OpenAI-Gradio,” simplifies the process of integrating OpenAI’s large language models (LLMs) into web applications, making AI more accessible to developers of all skill levels. The release signals a major shift in how companies can leverage AI, reducing development time while maintaining powerful, scalable applications.

How developers can create web apps in minutes with OpenAI-Gradio

The openai-gradio package integrates OpenAI’s API with Gradio, a popular interface tool for machine learning (ML) applications. In just a few steps, developers can install the package, set their OpenAI API key, and launch a fully functional web app. The simplicity of this setup allows even smaller teams with limited resources to deploy advanced AI models quickly. For instance, after installing the package with:

    pip install openai-gradio

a developer can write:

    import gradio as gr
    import openai_gradio

    gr.load(
        name='gpt-4-turbo',
        src=openai_gradio.registry,
    ).launch()

This small amount of code spins up a Gradio interface connected to OpenAI’s GPT-4-turbo model, allowing users to interact with state-of-the-art AI directly from a web app. Developers can also customize the interface further, adding specific input and output configurations or even embedding the app into larger projects.

Simplifying AI development for businesses of all sizes

Hugging Face’s openai-gradio package removes traditional barriers to AI development, such as managing complex backend infrastructure or dealing with model hosting. By abstracting these challenges, the package enables businesses of all sizes to build and deploy AI-powered applications without needing large engineering teams or significant cloud infrastructure. This shift makes AI development more accessible to a much wider range of businesses. Small and mid-sized companies, startups, and online retailers can now quickly experiment with AI-powered tools, like automated customer service systems or personalized product recommendations, without the need for complex infrastructure. With these new tools, companies can create and launch AI projects in days instead of months.

With Hugging Face’s new openai-gradio tool, developers can quickly create interactive web apps, like this one powered by the GPT-4-turbo model, allowing users to ask questions and receive AI-generated responses in real time. (Credit: Hugging Face / Gradio)

Customizing AI interfaces with just a few lines of code

One of the standout features of openai-gradio is how easily developers can customize the interface for specific applications. By adding a few more lines of code, they can adjust everything from the input fields to the output format, tailoring the app for tasks such as answering customer queries or generating reports. This could involve creating a chatbot that handles customer service questions or a data analysis tool that generates insights based on user inputs.
Here’s an example provided by Gradio:

    gr.load(
        name='gpt-4-turbo',
        src=openai_gradio.registry,
        title='OpenAI-Gradio Integration',
        description="Chat with GPT-4-turbo model.",
        examples=["Explain quantum gravity to a 5-year-old.", "How many R's are in the word Strawberry?"]
    ).launch()

The flexibility of the tool allows for seamless integration into broader web-based projects or standalone applications. The package also integrates into larger Gradio web UIs, enabling the use of multiple models in a single application (a sketch of this pattern appears at the end of this article).

Why this matters: Hugging Face’s growing influence in AI development

Hugging Face’s latest release positions the company as a key player in the AI infrastructure space. By making it easier to integrate OpenAI’s models into real-world applications, Hugging Face is pushing the boundaries of what developers can achieve with minimal resources. This move also signals a broader trend toward AI-first development, where companies can iterate more quickly and deploy cutting-edge technology into production faster than ever before. The openai-gradio package is part of Hugging Face’s broader strategy to empower developers and disrupt the traditional AI model development cycle. As Kevin Weil, OpenAI’s Chief Product Officer, mentioned during the company’s recent DevDay, lowering the barriers to AI adoption is critical to accelerating its use across industries. Hugging Face’s package directly addresses this need by simplifying the development process while maintaining the power of OpenAI’s LLMs.

Hugging Face’s openai-gradio package makes AI development as easy as writing a few lines of code. It opens the door for businesses to quickly build and deploy AI-powered web apps, leveling the playing field for startups and enterprises alike. The tool strips away much of the complexity that has traditionally slowed down AI adoption, offering a faster, more approachable way to harness the power of OpenAI’s language models. As more industries dive into AI, the need for scalable, cost-effective tools has never been greater. Hugging Face’s solution meets this need head-on, making it possible for developers to go from prototype to production in a fraction of the time. Whether you’re a small team testing the waters or a larger company scaling up, openai-gradio offers a practical, no-nonsense approach to getting AI into the hands of users. In a landscape where speed and agility are everything, if you’re not building with AI now, you’re already playing catch-up.
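As a sketch of the multi-model composition mentioned above, the snippet below follows the pattern documented for the package, rendering gr.load() interfaces inside a gr.Blocks layout. It assumes the OPENAI_API_KEY environment variable is set, and the model names are illustrative.

    import gradio as gr
    import openai_gradio

    # Two OpenAI-backed chat interfaces combined into one tabbed web app.
    with gr.Blocks() as demo:
        with gr.Tab("GPT-4 Turbo"):
            gr.load(name='gpt-4-turbo', src=openai_gradio.registry)
        with gr.Tab("GPT-4o mini"):
            gr.load(name='gpt-4o-mini', src=openai_gradio.registry)

    demo.launch()

Launching this serves a single app with one tab per model, a handy setup for side-by-side comparisons.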


Four Internets, book review: Possible internet futures, and how to reconcile them

Four Internets: Data, Geopolitics, and The Governance of Cyberspace • By Kieron O’Hara and Wendy Hall • Oxford University Press • 342 pages • ISBN: 978-0-19-752368-1 • £22.99

The early days of the internet were marked by cognitive dissonance expansive enough to include both the belief that the emerging social cyberspace could not be controlled by governments and the belief that it was constantly under threat of becoming fragmented. Twenty-five years on, concerns about fragmentation — the ‘splinternet’ — continue, but most would admit that the Great Firewall of China, along with shutdowns in various countries during times of protest, has proved conclusively that a determined government can indeed exercise a great deal of control if it wants to.

Meanwhile, those who remember the internet’s beginnings wax nostalgic about the days when it was ‘open’, ‘free’, and ‘decentralised’ — qualities they hope to recapture via Web3 (which many argue is already highly centralised). The big American technology companies dominate these discussions as much as they dominate most people’s daily online lives, as if the job would be complete after answering “What’s to be done about Facebook?”. The opposition in such public debates is generally the EU, which has done more to curb the power of big technology companies than any other authority.

In Four Internets: Data, Geopolitics, and The Governance of Cyberspace, University of Southampton academics Kieron O’Hara and Wendy Hall argue that this framing is too simple. Instead, as the title suggests, they take a broader international perspective to find four internet governance paradigms in play. These are: the open internet (which the authors connect with San Francisco); the ‘bourgeois Brussels’ internet that the EU is trying to regulate into being via legislation such as the Digital Services Act; the commercial (‘DC’) internet; and the paternalistic internet of countries like China, which want to control what their citizens can access.

You can quibble with these designations; the open internet needed many other locations for its creation besides San Francisco, but the libertarian Californian ideology dominated forward thinking in that period. And where I, as an American, see Big Tech as creatures of libertarian San Francisco, it’s in Washington DC that their vast lobbying funds are being spent. Without DC’s favourable policies, the commercial internet would not exist in its present form. O’Hara and Hall are, in other words, talking policy and ethos, not literally about who created which technologies or corporations.

Much of the book outlines the benefits and challenges deriving from each of these four approaches. Each provokes one or more policy questions for the authors to consider in the light of the four paradigms and of emerging technologies that may change the picture. A few examples: how to maintain quality in open systems; how to foster competition against the technology giants; whether a sovereign internet is possible; and when personal data should cross borders. None of these issues are easy to solve, and the authors don’t pretend to do so.

“This is not a book about saving the world,” O’Hara and Hall write. Instead, it’s an attempt to provide the background and understanding to help the rest of us find workable compromises that take the best from each of these approaches. Compromise will be essential, because the authors’ four internets are not particularly compatible.


Big data could see bushfire and flood-prone homes priced out of insurance

People steer their boat by the old Windsor Bridge under rising floodwaters along the Hawkesbury River in the Windsor suburb of Sydney on 9 March 2022. Saeed Khan/AFP via Getty Images

The use of big data by insurers in Australia could lead to homeowners living in at-risk areas being priced out of the insurance market, the Actuaries Institute has warned. The warning comes as parts of New South Wales and Queensland continue to face heavy floods, which have so far led to 21 deaths and billions of dollars in property damage.

Actuaries Institute CEO Elayne Grace said leveraging data from smart devices, aerial imagery, and raw text input has allowed the insurance industry to uncover a greater dispersion of risk, resulting in more accurate estimates of risk. While this creates benefits for insurers and low-risk customers, it also leads to a greater range of premiums, making insurance more expensive for homeowners living in areas prone to floods and bushfires.

“Some consumers are excluded from insurance: There will be a growing number of customers for whom insurance becomes less affordable and, as a consequence, they under-insure or do not insure at all. If the risk exceeds the risk appetite for all insurers, insurance will become unavailable,” Grace said at a digital economy conference conducted by the Reserve Bank of Australia and the Australian Bureau of Statistics.

The Actuaries Institute also warned that the growing use of big data for insurance could potentially lead to an extreme set of outcomes, termed a “vicious cycle”, where the insurance pool becomes smaller, less diversified, and higher risk, leading to increasingly higher premiums for remaining customers.

It said the Australian government may have a role to play when competitive insurance markets do not deliver adequate cover at an affordable price, specifically in cases where the underlying risk is beyond the consumer’s reasonable control and the insurance is essential. In flagging this concern, it recommended that the government establish an expert group to discuss and develop a set of broad principles to protect consumers.

On Wednesday, Productivity Commissioner Catherine de Fontenay said at the same conference that the measurable benefit of adopting cloud in Australia is still unclear, except where a company is operating in regional and remote Australia. The pattern was found as part of a Productivity Commission study into whether cloud technology is associated with higher firm turnover per worker and higher wages per worker. Beyond this pattern, however, de Fontenay said there is currently no correlation between a company performing well and using cloud services when viewing companies’ turnover per employee and average wages data.
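To illustrate the “vicious cycle” the Actuaries Institute describes above, here is a toy Python simulation. The risk distribution, the 20% loading, and the willingness-to-pay figures are invented purely for illustration; they are not actuarial data and are not drawn from the Institute’s analysis.

    import numpy as np

    # Toy adverse-selection spiral: the insurer charges a flat, community-rated
    # premium equal to the pool's mean expected loss plus a 20% loading; owners
    # whose willingness to pay falls below the premium drop out, leaving a
    # smaller, riskier pool. All numbers are illustrative.
    rng = np.random.default_rng(1)
    expected_loss = rng.lognormal(mean=6.5, sigma=1.2, size=10_000)   # per-home annual risk cost
    willingness = expected_loss * rng.uniform(1.0, 2.0, size=10_000)  # what each owner would pay

    insured = np.ones(expected_loss.size, dtype=bool)
    for year in range(1, 6):
        if not insured.any():
            print(f"year {year}: no one left in the pool")
            break
        premium = 1.2 * expected_loss[insured].mean()
        insured &= willingness >= premium
        print(f"year {year}: premium ${premium:,.0f}, insured share {insured.mean():.0%}")

Each round the remaining pool is riskier than the last, so the premium climbs and more lower-risk owners drop out, which is the smaller, less diversified, higher-risk pool the Institute warns about.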


AI pioneer Geoffrey Hinton, who warned of X-risk, wins Nobel Prize in Physics

Geoffrey E. Hinton, a leading artificial intelligence researcher and professor emeritus at the University of Toronto, has been awarded the 2024 Nobel Prize in Physics alongside John J. Hopfield of Princeton University. The Royal Swedish Academy of Sciences has awarded the two men a prize of 11 million Swedish kronor (approximately $1.06 million USD), to be shared equally between the laureates. Hinton has been nicknamed the “Godfather of AI” by various outlets and fellow researchers for his revolutionary work on artificial neural networks, a foundational technology underpinning modern artificial intelligence.

Despite the recognition, Hinton has grown increasingly cautious about the future of AI. In 2023, he left his role at Google to speak more freely about the potential dangers posed by uncontrolled AI development. Hinton has warned that rapid advancements in AI could lead to unintended and harmful consequences, ranging from misinformation and job displacement to existential threats such as human extinction, the so-called “x-risk.” He has expressed concern that the very technology he helped create may eventually surpass human intelligence in unpredictable ways, a scenario he finds particularly troubling.

As MIT Tech Review reported after interviewing him in May 2023, Hinton was particularly concerned about bad actors, such as authoritarian leaders, who could use AI to manipulate elections, wage wars, or carry out immoral objectives. He expressed concern that AI systems, when tasked with achieving goals, may develop dangerous subgoals, such as monopolizing energy resources or self-replicating. While Hinton did not sign the high-profile letters calling for a moratorium on AI development, his departure from Google signaled a pivotal moment for the tech industry. Hinton believes that, without global regulation, AI systems could become uncontrollable, a sentiment echoed by many within the field. His vision for AI is now shaped by both its immense potential and the looming risks it carries.

Even reflecting on his work today after winning the Nobel, Hinton told CNN that generative AI “…will be comparable with the industrial revolution. But instead of exceeding people in physical strength, it’s going to exceed people in intellectual ability. We have no experience of what it’s like to have things smarter than us…we also have to worry about a number of possible bad consequences, particularly the threat of these things getting out of control.”

What Hinton won the Nobel for

Geoffrey Hinton’s recognition with the Nobel Prize comes as no surprise to those familiar with his extensive contributions to artificial intelligence. Born in London in 1947, Hinton pursued a PhD at the University of Edinburgh, where he embraced neural networks—an idea that was largely disregarded by most researchers at the time. In 1985, he and collaborator Terry Sejnowski created the “Boltzmann machine,” an algorithm, named for Austrian physicist Ludwig Boltzmann, capable of learning to identify elements in data. Joining the University of Toronto in 1987, Hinton worked with graduate students to further advance AI.
Their work became central to the development of today’s machine learning systems, forming the basis for many of the applications we use today, including image recognition, natural language processing, self-driving cars, and even language models like OpenAI’s GPT series.

In 2012, Hinton and two of his graduate students from the University of Toronto, Ilya Sutskever and Alex Krizhevsky, founded a spinoff company called DNNresearch to focus on advancing deep neural networks—specifically “deep learning”—which models artificial intelligence on the human brain’s neural pathways to improve machine learning capabilities. Hinton and his collaborators developed a neural network capable of recognizing images (like flowers, dogs, and cars) with unprecedented accuracy, a feat that had long seemed unattainable. Their research fundamentally changed AI’s approach to computer vision, showcasing the immense potential of neural networks when trained on vast amounts of data.

Despite its significant achievements, DNNresearch had no products or immediate commercial ambitions when it was founded. Instead, it was formed as a mechanism for Hinton and his students to more effectively navigate the growing interest in their work from major tech companies, which would eventually lead to the auction that sparked the modern race for AI dominance. In fact, they put the company up for auction in December 2012, setting off a competitive bidding war among Google, Microsoft, Baidu, and DeepMind, as recounted in an amazing 2021 Wired magazine article by Cade Metz. Hinton eventually chose to sell to Google for $44 million, even though he could have driven the price higher. The auction marked the beginning of an AI arms race between tech giants, driving rapid advancements in deep learning and AI technology.

This background is critical to understanding Hinton’s impact on AI and how his innovations contributed to his being awarded the Nobel Prize in Physics today, reflecting the foundational importance of his work in neural networks and machine learning to the evolution of modern AI. U of T President Meric Gertler congratulated Hinton on his accomplishment, highlighting the university’s pride in his historic achievement.

Hopfield’s legacy

John J. Hopfield, a professor at Princeton University who shares the Nobel Prize with Hinton, developed an associative memory model, known as the Hopfield network, which revolutionized how patterns, including images, can be stored and reconstructed. The model applies principles from physics, specifically atomic spin systems, to neural networks, enabling them to work through incomplete or distorted data and restore full patterns. The idea is loosely analogous to how the diffusion models powering image and video AI services learn to create new images by training on reconstructing corrupted ones. His contributions have not only influenced AI but have also impacted computational neuroscience and error correction, showcasing the interdisciplinary relevance of his work, and they paved the way for further advancements in AI, including Hinton’s Boltzmann machine.

While Hinton’s work catapulted neural networks into the modern era, Hopfield’s earlier breakthroughs laid a crucial foundation for pattern recognition in neural models. Both laureates’ achievements have significantly influenced the rapid growth of AI, leading…
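To make the associative-memory idea concrete, here is a minimal Python sketch of a textbook Hopfield network: patterns of +1/-1 values are stored with a Hebbian rule, and a corrupted input is updated until it settles back onto the nearest stored pattern. The pattern count, size, and noise level are arbitrary choices for the demo, not parameters from Hopfield’s papers.

    import numpy as np

    def train_hopfield(patterns):
        """Hebbian outer-product rule: weights accumulate pairwise correlations."""
        n = patterns.shape[1]
        W = np.zeros((n, n))
        for p in patterns:
            W += np.outer(p, p)
        np.fill_diagonal(W, 0)  # no self-connections
        return W / len(patterns)

    def recall(W, state, steps=10):
        """Asynchronously update units until the network settles on a stored pattern."""
        state = state.copy()
        for _ in range(steps):
            for i in np.random.permutation(len(state)):
                state[i] = 1 if W[i] @ state >= 0 else -1
        return state

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        stored = rng.choice([-1, 1], size=(3, 100))      # three random +/-1 patterns
        W = train_hopfield(stored)
        noisy = stored[0].copy()
        flips = rng.choice(100, size=15, replace=False)  # corrupt 15 of 100 units
        noisy[flips] *= -1
        print(np.array_equal(recall(W, noisy), stored[0]))  # usually True

With only a few stored patterns relative to the number of units, the update reliably pulls the noisy input back to the original, which is the “restore full patterns from distorted data” behavior the committee credited.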


Nvidia releases plugins to improve digital human realism on Unreal Engine 5

Nvidia released its latest tech for creating AI-powered characters who look and behave like real humans. At Unreal Fest Seattle 2024, Nvidia released its new Unreal Engine 5 on-device plugins for Nvidia Ace, making it easier to build and deploy AI-powered MetaHuman characters on Windows PCs. Ace is a suite of digital human technologies that provide speech, intelligence and animation powered by generative AI.

Developers can now access a new Audio2Face-3D plugin for AI-powered facial animation (where lips and faces move in sync with audio speech) in Autodesk Maya. The plugin gives developers a simple, streamlined interface to speed up and simplify avatar development in Maya, and it comes with source code so developers can dive in and build a plugin for the digital content creation (DCC) tool of their choice.

Nvidia has also built an Unreal Engine 5 renderer microservice that leverages Epic’s Unreal Pixel Streaming technology. This microservice now supports the Nvidia Ace Animation Graph microservice and the Linux operating system in early access. The Animation Graph microservice enables realistic and responsive character movements, and with Unreal Pixel Streaming support, developers can stream their MetaHuman creations to any device.

Nvidia is making it easier to make MetaHumans with ACE.

The Nvidia Ace Unreal Engine 5 sample project serves as a guide for developers looking to integrate digital humans into their games and applications. The sample project expands the number of on-device ACE plugins:

- Audio2Face-3D for lip sync and facial animation
- Nemotron-Mini 4B Instruct for response generation
- RAG for contextual information

Nvidia said developers can build a database full of contextual lore for their intellectual property, generate relevant responses at low latency and have those responses drive corresponding MetaHuman facial animations seamlessly in Unreal Engine 5. Each of these microservices was optimized to run on Windows PCs with low latency and a minimal memory footprint.

Nvidia also unveiled a series of tutorials for setting up and using the Unreal Engine 5 plugin. The new plugins are coming soon; to get started today, developers should have the appropriate Nvidia Ace plugin and Unreal Engine sample downloaded, along with a MetaHuman character.

Autodesk Maya offers high-performance animation functions for game developers and technical artists to create high-quality 3D graphics. With the Audio2Face-3D plugin, developers can now more easily generate high-quality, audio-driven facial animation for any character. The user interface has been streamlined, and you can seamlessly transition to the Unreal Engine 5 environment. The source code and scripts are highly customizable and can be modified for use in other digital content creation tools.

To get started on Maya, devs can get an API key or download the Audio2Face-3D NIM. Nvidia NIM is a set of easy-to-use AI inference microservices that speed up the deployment of foundation models on any cloud or data center. Then ensure you have Autodesk Maya 2023, 2024 or 2025, and access the Maya ACE GitHub repository, which includes the Maya plugin, gRPC client libraries, test assets and a sample scene — everything you need to explore, learn and innovate with Audio2Face-3D.
Developers deploying digital humans through the cloud are trying to reach as many customers as possible simultaneously; however, streaming high-fidelity characters requires significant compute resources. Today, the latest Unreal Engine 5 renderer microservice in Nvidia Ace adds support for the Nvidia Animation Graph microservice and the Linux operating system in early access.

Animation Graph is a microservice that facilitates the creation of animation state machines and blend trees. It gives developers a flexible node-based system for animation blending, playback and control. The new Unreal Engine 5 renderer microservice with pixel streaming consumes data coming from the Animation Graph microservice, allowing developers to run their MetaHuman character on a server in the cloud and stream its rendered frames and audio to any browser and edge device over Web Real-Time Communication (WebRTC).

Developers can apply for early access today to download the Unreal Engine 5 renderer microservice, with support for the Animation Graph microservice and Linux OS. You can learn more about Nvidia Ace and download the NIM microservices to begin building game characters powered by generative AI. The Maya Ace plugin is available to download on GitHub.
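The lore-database idea described above is essentially retrieval-augmented generation (RAG): fetch the most relevant snippets from a knowledge base and hand them to the language model together with the player’s question. The sketch below is a deliberately generic, dependency-free Python illustration of that flow; it does not use Nvidia’s ACE or NIM APIs, and the lore entries and keyword-overlap scoring are placeholder assumptions.

    import re

    # Tiny stand-in for a game's lore database (placeholder content).
    LORE = {
        "blacksmith": "Tharn the blacksmith forged the city gates after the siege.",
        "harbor": "The harbor was rebuilt in the Year of Storms by the trade guilds.",
    }

    def tokens(text):
        """Lowercase word tokens, ignoring punctuation."""
        return set(re.findall(r"[a-z0-9']+", text.lower()))

    def retrieve(question, k=1):
        """Rank lore entries by simple word overlap with the question."""
        q = tokens(question)
        ranked = sorted(LORE.values(), key=lambda entry: len(q & tokens(entry)), reverse=True)
        return ranked[:k]

    def build_prompt(question):
        """Assemble the context-plus-question prompt sent to a response model."""
        context = "\n".join(retrieve(question))
        return f"Use only this lore:\n{context}\n\nPlayer asks: {question}\nAnswer in character:"

    print(build_prompt("Who rebuilt the harbor?"))

A production pipeline would swap the keyword scoring for vector search and send the assembled prompt to the response-generation model (Nemotron-Mini 4B Instruct in Nvidia’s sample), whose output would then drive the Audio2Face-3D animation.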


Foremski's last word: When every company is a media company

My first post on ZDNet was in January 2006, in a column discussing “skinny applications.” In the 16 years since then, I’ve continued to try to be ahead of the curve in spotting key trends. The most important trend I’ve written about, perhaps my biggest obsession, has been the disruption of the media industry’s business model by the technologies of the Internet.

I long ago recognized that the emergence of the web was a powerful publishing technology; the development of what was called Web 2.0, coupled with the explosive use of blogging technologies, added a powerful additional feature. There is a double use — the computer screen is both the publication and the printing press — in that the screen can also publish back. That made possible social media platforms, targeted advertising, and many other aspects of the modern online world as we currently experience it. Advertisers no longer need to rely on newspapers, TV, or radio to reach potential customers. They can reach and interact with consumers directly, whether the customers like it or not.

The lost revenues have resulted in poor-quality media and a danger to society. We recognize the need for media professionals, the gatekeepers, to keep things like hate speech and fake news in check, but that is something today’s media companies, like Facebook and Twitter, either cannot or will not do. Today we understand the value of traditional media and how it moderated public discussions, keeping them civil and focused. We understand the value of newspapers in keeping fake news out, and the value of high-quality, verified news stories. So why can’t we capture that value and reward it? Why are scam sites filled with fake news getting rewarded? Why do we still have no solution? With all our visionaries and tech geniuses, why have we still not come up with a technology that can capture the value of high-quality media and reward it so that more can be produced?

I realized nearly 20 years ago that the demise of the media industry means that every company has to become a media company to some degree. Journalists used to visit a company and write about its achievements, but there are few journalists left and they are overworked. Companies have been forced to try to produce their own stories about themselves.

However, companies are not well suited to being media companies, and they find it hard to produce media content of every kind. Companies struggle with the editorial process: a single article produced by a company can involve weeks of meetings with stakeholders who can veto it at any point. This is a terribly inefficient and expensive way to produce media content. Companies now have access to incredibly powerful media technologies: for example, they can outfit a high-definition video and recording studio for very little cost, including sophisticated editing software. But it is not enough. Companies still need to produce compelling content on a regular schedule, and it has to compete in a world of compelling content — which is why companies struggle to be media companies and generally do a very bad job.

But companies must get better at being media companies, because the disruption of the media industry continues without a solution in sight. Companies will have to get better at producing and distributing their own media. Over the past two decades, I have been showing some of the largest tech companies — such as IBM, Intel, HP, SAP, Tibco, and Infineon — how to be media companies. Change happens slowly, but it happens.
“Every company is a media company” is one of the most important and disruptive trends in our global society. It’s exciting to be in the middle of such a shift. While this is my final IMHO post here on ZDNet, you can keep up with some of my work here at Silicon Valley Watcher — at the intersection of technology and media. And look out for Every Company is a Media Company: EC=MC — a transformative equation for every business.


Why Asian Immigrants Come to the U.S. and How They View Life Here

74% say they’d move to the U.S. again if they could, but a majority says the nation’s immigration system needs significant changes

The terms Asian, Asians living in the United States, U.S. Asian population and Asian Americans are used interchangeably throughout this report to refer to U.S. adults who self-identify as Asian, either alone or in combination with other races or Hispanic identity, unless otherwise noted.

The term immigrants, when referring to survey respondents, includes those born outside the 50 U.S. states or the District of Columbia, Puerto Rico or other U.S. territories. When referring to Census Bureau data, this group includes those who were not U.S. citizens at birth – in other words, those born outside the 50 U.S. states or D.C., Puerto Rico, or other U.S. territories to parents who were not U.S. citizens. Immigrant and foreign born are used interchangeably throughout this report. The term U.S. born refers to people born in the 50 U.S. states, the District of Columbia, Puerto Rico or other U.S. territories.

Ethnicity labels, such as Chinese or Filipino, are used in this report for findings for Asian immigrant ethnic groups, such as Chinese, Filipino, Indian, Korean or Vietnamese. For this report, ethnicity is not nationality or birthplace. For example, Chinese immigrants in this report are those self-identifying as of Chinese ethnicity, rather than necessarily being a current or former citizen of the People’s Republic of China. Ethnic groups in this report include those who self-identify as one Asian ethnicity only, either alone or in combination with a non-Asian race or ethnicity.

Less populous Asian immigrant ethnic groups in this report are those who self-identify with ethnic groups that are not among the five largest Asian immigrant ethnic groups and identify with only one Asian ethnicity. These ethnic origin groups each represent about 3% or less of the Asian immigrant population in the U.S. For example, those who identify as Burmese, Hmong, Japanese or Pakistani, among others, are included in this category. These groups are not reportable on their own due to small sample sizes, but collectively they are reportable under this category.

Country of origin is used in this report to refer to the places that respondents trace their roots or origin to. This may be influenced by ethnicity, birthplace, nationality, ancestry, or other social, cultural or political factors. This study asks several questions about respondents’ connection to and views of their country of origin. Subsequently, in different sections of this report, country of origin is used interchangeably with home country, country they came from, country where they were born, and country where their family or ancestors are from, depending on how the specific question was asked. For the exact question wording, refer to the topline.

Throughout this report, the phrases “Democrats and Democratic leaners” and “Democrats” both refer to respondents who identify politically with the Democratic Party or who are independent or identify with some other party but lean toward the Democratic Party. Similarly, the phrases “Republicans and Republican leaners” and “Republicans” both refer to respondents who identify politically with the Republican Party or are independent or identify with some other party but lean toward the Republican Party.

Pew Research Center conducted this analysis to understand Asian immigrants’ experiences coming to and living in the United States.
This report is part of the Center’s ongoing in-depth analysis of public opinion among Asian Americans. The data in this report comes from two main sources.

The first data source is Pew Research Center’s survey of Asian American adults, conducted from July 5, 2022, to Jan. 27, 2023, in six languages among 7,006 respondents, including 5,036 Asian immigrants. For details, refer to the methodology. For questions used in this analysis, along with responses, refer to the topline.

The second set of data sources is the U.S. Census Bureau’s decennial census data and American Community Surveys (ACS). Demographic analyses of Asian immigrants are based on full-count decennial censuses from 1860, 1870, 1880, 1900, 1910, 1920, 1930 and 1940; the 1% 1950 census; the 5% 1960 census; the 3% 1970 census (1% Form 1 state sample, 1% Form 1 metro sample, and 1% Form 1 neighborhood sample); the 5% 1980 census, 5% 1990 census, and 5% 2000 census; and the 2009, 2014, 2019 and 2022 ACS 5-year samples. Both decennial census data and ACS data were obtained through IPUMS USA.

Analysis of census data for the immigrant or foreign-born population covers people born outside the United States or its territories who were not U.S. citizens at birth. People born in the following places were defined as part of the U.S.-born population, provided these territories were recognized as U.S. territories at the time of the respective surveys: Alaska (1870 and later); Hawaii, Puerto Rico, Guam and American Samoa (1900 and later); Philippines (1900-1940); Panama Canal Zone (1910-1970); U.S. Virgin Islands (1920 and later); Trust Territory of the Pacific (1950-1980); and Northern Mariana Islands (1950 and later).

Pew Research Center is a subsidiary of The Pew Charitable Trusts, its primary funder. The Center’s Asian American portfolio was funded by The Pew Charitable Trusts, with generous support from The Asian American Foundation; Chan Zuckerberg Initiative DAF, an advised fund of the Silicon Valley Community Foundation; the Robert Wood Johnson Foundation; the Henry Luce Foundation; the Doris Duke Foundation; The Wallace H. Coulter Foundation; The Dirk and Charlene Kabcenell Foundation; The Long Family Foundation; Lu-Hebert Fund; Gee Family Foundation; Joseph Cotchett; the Julian Abdey and Sabrina Moyle Charitable Fund; and Nanci Nishimura. We would also like to thank the Leaders Forum for its thought leadership and valuable assistance in helping make this survey possible. The strategic communications campaign used to promote the research was made possible with generous support from the Doris Duke Foundation.

Asian Americans are the only major racial or ethnic group in the United States that is majority immigrant. Some 54% of the 24 million Asian Americans living in the U.S. are immigrants; among Asian adults, that share rises to 67%. Asian…


Predictions: Where will storage be in 40 years?

Editor’s note: Retiring after 40 years in the computer industry and 14 years with ZDNet will leave Robin Harris with more time for other adventures, as seen above. Robin: Your wit, intelligence and expertise will be sorely missed. [Photo credit: Robin Harris]

Robin Harris

In 1981, a 5MHz 32-bit supermini CPU cost $150,000, hard drives cost $50/MB, and IBM’s SNA network was dominant in many enterprises. FORTRAN and COBOL ruled technical and business computing. A PC meant either Radio Shack, Commodore, or Apple, and none were common in business.

The next five years changed all that. The IBM PC legitimized microcomputers. Seagate shipped a 5MB 5.25-inch hard drive. Intel, DEC, and Xerox introduced Ethernet. VisiCalc drove PC adoption across enterprises. And boffins were using a fledgling ARPAnet, often running a rickety OS called UNIX.

I never would have predicted that 40 years later I’d be wearing more power on my wrist than you could buy then, with a better display and a wider range of applications. Or that my $1,000 notebook would have more storage than all but the largest enterprises. But that won’t stop me from trying. Here’s what I look forward to, 40 years from now.

Predictions

Storage will continue to evolve at an accelerating pace. Non-volatile memory — the 21st century’s answer to magnetic core memory — is especially exciting. Several new storage technologies will reach market, each faster and more resilient, but not necessarily cheaper, than what we have today. Bio-based and crystal-based storage will be battling it out. Bio will be denser, but crystalline storage will be faster and longer-lived, and each will find a niche.

Persistence, the raison d’être of storage, will emerge as a key problem. Our entire digital civilization is based on magnetic media and quantum wells with a 5-10 year lifespan. A few EMPs in the right places — or one massive solar flare — and the last several decades of knowledge and records will be lost forever. We must do better, but we won’t until a major disaster strikes first.

The virtualization of human interaction, turbocharged by more pandemics, will continue apace. That means the rapid growth of hyper-scale cloud computing won’t be able to keep pace with the data generated by mobile devices at the edge, especially as prices continue to drop, third-world incomes rise, and another 4 billion people come online. Gathering, analyzing, reducing and monetizing edge data will be a major industry, enabling us to manage, I hope, the logistics of a much more turbulent world of 10 billion people.

Social networks will be recognized and regulated as public utilities. Letting a few hundred malicious actors derail and pollute public discourse is unacceptable, no matter how profitable a few billionaires and their sycophants find it.

The Covid-driven rush to rural areas won’t last. Urbanization has been a secular trend for 500 years because cities are wealth-creation machines. Back in the 1840s, living in a city cut a decade off your lifespan, and people still flocked to them. We’ll get past Covid.

Turning the page

I’m looking 40 years ahead because, as of September 1, 2021, I’m retiring from the computer industry and more than 14 years at ZDNet. Joining the industry began a major new chapter in my life. Now it’s time for another chapter. Literally. I intend to finish the historical novel I’ve been working on for the last six years. I’ve put a draft of the first chapter online here. Warning: if you need American history sugar-coated, this isn’t for you.
As excited as I am about this next chapter in my life and work, I’ve also engaged in some anticipatory nostalgia. I’ll miss people I’ve met at CES, NAB, the Usenix FAST conference, and the excellent Non-volatile Memory Workshop at UC San Diego, and now, the people I will never meet — like most of the rest of the excellent ZDNet crew. As I turn the page on this chapter of my life, I’m looking forward to continuing to learn and grow as best I can. May it ever be so for you as well. Comments welcome. What turning points are you looking forward to?
