Hugging Face’s new tool lets devs build AI-powered web apps with OpenAI in just minutes

Hugging Face has released an innovative new Python package that allows developers to create AI-powered web apps with just a few lines of code. The tool, called "openai-gradio," simplifies the process of integrating OpenAI's large language models (LLMs) into web applications, making AI more accessible to developers of all skill levels. The release signals a major shift in how companies can leverage AI, reducing development time while maintaining powerful, scalable applications.

How developers can create web apps in minutes with OpenAI-Gradio

The openai-gradio package integrates OpenAI's API with Gradio, a popular interface tool for machine learning (ML) applications. In just a few steps, developers can install the package, set their OpenAI API key, and launch a fully functional web app. The simplicity of this setup allows even smaller teams with limited resources to deploy advanced AI models quickly. For instance, after installing the package with:

    pip install openai-gradio

a developer can write:

    import gradio as gr
    import openai_gradio

    gr.load(
        name='gpt-4-turbo',
        src=openai_gradio.registry,
    ).launch()

This small amount of code spins up a Gradio interface connected to OpenAI's GPT-4-turbo model, allowing users to interact with state-of-the-art AI directly from a web app. Developers can also customize the interface further, adding specific input and output configurations or even embedding the app into larger projects.

Simplifying AI development for businesses of all sizes

Hugging Face's openai-gradio package removes traditional barriers to AI development, such as managing complex backend infrastructure or dealing with model hosting. By abstracting these challenges, the package enables businesses of all sizes to build and deploy AI-powered applications without needing large engineering teams or significant cloud infrastructure.
This shift makes AI development more accessible to a much wider range of businesses. Small and mid-sized companies, startups, and online retailers can now quickly experiment with AI-powered tools, like automated customer service systems or personalized product recommendations, without the need for complex infrastructure. With these new tools, companies can create and launch AI projects in days instead of months.

With Hugging Face's new openai-gradio tool, developers can quickly create interactive web apps, like this one powered by the GPT-4-turbo model, allowing users to ask questions and receive AI-generated responses in real time. (Credit: Hugging Face / Gradio)

Customizing AI interfaces with just a few lines of code

One of the standout features of openai-gradio is how easily developers can customize the interface for specific applications. By adding a few more lines of code, they can adjust everything from the input fields to the output format, tailoring the app for tasks such as answering customer queries or generating reports. This could involve creating a chatbot that handles customer service questions or a data analysis tool that generates insights based on user inputs. Here's an example provided by Gradio:

    gr.load(
        name='gpt-4-turbo',
        src=openai_gradio.registry,
        title='OpenAI-Gradio Integration',
        description="Chat with GPT-4-turbo model.",
        examples=["Explain quantum gravity to a 5-year-old.", "How many R's are in the word Strawberry?"]
    ).launch()

The flexibility of the tool allows for seamless integration into broader web-based projects or standalone applications. The package also fits into larger Gradio Web UIs, enabling the use of multiple models in a single application.
Why this matters: Hugging Face's growing influence in AI development

Hugging Face's latest release positions the company as a key player in the AI infrastructure space. By making it easier to integrate OpenAI's models into real-world applications, Hugging Face is pushing the boundaries of what developers can achieve with minimal resources. This move also signals a broader trend toward AI-first development, where companies can iterate more quickly and deploy cutting-edge technology into production faster than ever before.

The openai-gradio package is part of Hugging Face's broader strategy to empower developers and disrupt the traditional AI model development cycle. As Kevin Weil, OpenAI's Chief Product Officer, mentioned during the company's recent DevDay, lowering the barriers to AI adoption is critical to accelerating its use across industries. Hugging Face's package directly addresses this need by simplifying the development process while maintaining the power of OpenAI's LLMs.

Hugging Face's openai-gradio package makes AI development as easy as writing a few lines of code. It opens the door for businesses to quickly build and deploy AI-powered web apps, leveling the playing field for startups and enterprises alike. The tool strips away much of the complexity that has traditionally slowed down AI adoption, offering a faster, more approachable way to harness the power of OpenAI's language models.

As more industries dive into AI, the need for scalable, cost-effective tools has never been greater. Hugging Face's solution meets this need head-on, making it possible for developers to go from prototype to production in a fraction of the time. Whether you're a small team testing the waters or a larger company scaling up, openai-gradio offers a practical, no-nonsense approach to getting AI into the hands of users. In a landscape where speed and agility are everything, if you're not building with AI now, you're already playing catch-up.


Four Internets, book review: Possible internet futures, and how to reconcile them

Four Internets: Data, Geopolitics, and The Governance of Cyberspace • By Kieron O’Hara and Wendy Hall • Oxford University Press • 342 pages • ISBN: 978-0-19-752368-1 • £22.99

The early days of the internet were marked by cognitive dissonance expansive enough to include both the belief that the emerging social cyberspace could not be controlled by governments and the belief that it was constantly under threat of becoming fragmented. Twenty-five years on, concerns about fragmentation — the ‘splinternet’ — continue, but most would admit that the Great Firewall of China, along with shutdowns in various countries during times of protest, has proved conclusively that a determined government can indeed exercise a great deal of control if it wants to.

Meanwhile, those who remember the internet’s beginnings wax nostalgic about the days when it was ‘open’, ‘free’, and ‘decentralised’ — qualities they hope to recapture via Web3 (which many argue is already highly centralised). The big American technology companies dominate these discussions as much as they dominate most people’s daily online lives, as if the job would be complete after answering “What’s to be done about Facebook?”. The opposition in such public debates is generally the EU, which has done more to curb the power of big technology companies than any other authority.

In Four Internets: Data, Geopolitics, and The Governance of Cyberspace, University of Southampton academics Kieron O’Hara and Wendy Hall argue that this framing is too simple. Instead, as the title suggests, they take a broader international perspective to find four internet governance paradigms in play. These are: the open internet (which the authors connect with San Francisco); the ‘bourgeois Brussels’ internet that the EU is trying to regulate into being via legislation such as the Digital Services Act; the commercial (‘DC’) internet; and the paternalistic internet of countries like China, which want to control what their citizens can access.
You can quibble with these designations; the open internet needed many other locations for its creation besides San Francisco, but the libertarian Californian ideology dominated forward thinking in that period. And where I, as an American, see Big Tech as creatures of libertarian San Francisco, it’s in Washington DC that their vast lobbying funds are being spent. Without DC’s favourable policies, the commercial internet would not exist in its present form. O’Hara and Hall are, in other words, talking policy and ethos, not literally about who created which technologies or corporations.

Much of the book outlines the benefits and challenges deriving from each of these four approaches. Each provokes one or more policy questions for the authors to consider in the light of the four paradigms, and of emerging technologies that may change the picture. A few examples: how to maintain quality in open systems; how to foster competition against the technology giants; whether a sovereign internet is possible; and when personal data should cross borders. None of these issues is easy to solve, and the authors don’t pretend to do so.

“This is not a book about saving the world,” O’Hara and Hall write. Instead, it’s an attempt to provide the background and understanding to help the rest of us find workable compromises that take the best from each of these approaches. Compromise will be essential, because the authors’ four internets are not particularly compatible.


Big data could see bushfire and flood-prone homes priced out of insurance

People steer their boat by the old Windsor Bridge under rising floodwaters along the Hawkesbury River in the Windsor suburb of Sydney on 9 March 2022. Saeed Khan/AFP via Getty Images

The use of big data by insurers in Australia could lead to homeowners living in at-risk areas being priced out of the insurance market, the Actuaries Institute has warned. The warning comes as parts of New South Wales and Queensland continue to face heavy floods, which have led to 21 deaths and billions of dollars in property damage so far.

Actuaries Institute CEO Elayne Grace said the benefits of leveraging data from smart devices, aerial imagery, and raw text input have allowed the insurance industry to uncover a greater dispersion of risk, resulting in more accurate estimates of risk. While this creates benefits for insurers and low-risk customers, it also leads to a greater range of premiums, with insurance for homeowners living in areas prone to floods and bushfires becoming more expensive.

“Some consumers are excluded from insurance: There will be a growing number of customers for whom insurance becomes less affordable and, as a consequence, they under-insure or do not insure at all. If the risk exceeds the risk appetite for all insurers, insurance will become unavailable,” Grace said at a digital economy conference conducted by the Reserve Bank of Australia and the Australian Bureau of Statistics.

The Actuaries Institute also warned that the growing use of big data for insurance could lead to an extreme set of outcomes, termed a “vicious cycle”, where the insurance pool becomes smaller, less diversified, and higher risk, leading to increasingly higher premiums for remaining customers. It said the Australian government may have a role to play when competitive insurance markets do not deliver adequate cover at an affordable price, specifically in cases where the underlying risk is beyond the consumer’s reasonable control and the insurance is essential.
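The “vicious cycle” the Institute describes can be illustrated with a toy simulation (purely hypothetical numbers, not actuarial data): premiums are set at the pool’s average expected loss, and each round the low-risk customers who are most overcharged drop out, pushing the premium higher for everyone who remains.

```python
# Toy model of an insurance premium spiral: a flat premium equals the pool's
# average expected loss; customers leave when that premium is more than
# `tolerance` times their own expected loss, shrinking the pool each round.
def premium_spiral(expected_losses, tolerance=2.0, rounds=5):
    """Return the flat premium charged in each round until the pool stabilises."""
    pool = list(expected_losses)
    history = []
    for _ in range(rounds):
        if not pool:
            break
        premium = sum(pool) / len(pool)  # flat premium = pool average loss
        history.append(round(premium, 2))
        # Low-risk customers exit when the flat premium overcharges them.
        new_pool = [loss for loss in pool if premium <= loss * tolerance]
        if len(new_pool) == len(pool):
            break  # nobody left this round; the pool has stabilised
        pool = new_pool
    return history

# A pool mixing many low-risk homes with a few flood-prone ones.
losses = [100] * 8 + [1000] * 2
print(premium_spiral(losses))
```

In this toy pool the flat premium starts at 280, but once the eight low-risk homes exit it jumps to 1,000, leaving only the high-risk homes to fund each other — the smaller, less diversified, higher-risk pool the Institute warns about.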
In flagging this concern, it recommended that the government establish an expert group to discuss and develop a set of broad principles to protect consumers.

On Wednesday, Productivity Commissioner Catherine de Fontenay said at the same conference that the measurable benefit of adopting cloud technology in Australia is still unclear, except where a company operates in regional and remote Australia. The pattern was found as part of a study conducted by the Productivity Commission into whether cloud technology is associated with higher firm turnover per worker and higher wages per worker. Beyond this pattern, however, de Fontenay said there is currently no correlation between a company performing well and using cloud services when viewing companies’ turnover per employee and average wages data.


AI pioneer Geoffrey Hinton, who warned of X-risk, wins Nobel Prize in Physics

Geoffrey E. Hinton, a leading artificial intelligence researcher and professor emeritus at the University of Toronto, has been awarded the 2024 Nobel Prize in Physics alongside John J. Hopfield of Princeton University. The Royal Swedish Academy of Sciences has awarded the pair a prize of 11 million Swedish kronor (approximately $1.06 million USD), to be shared equally between the laureates.

Hinton has been nicknamed the “Godfather of AI” by various outlets and fellow researchers for his revolutionary work on artificial neural networks, a foundational technology underpinning modern artificial intelligence. Despite the recognition, Hinton has grown increasingly cautious about the future of AI. In 2023, he left his role at Google to speak more freely about the potential dangers posed by uncontrolled AI development.

Hinton has warned that rapid advancements in AI could lead to unintended and harmful consequences, including misinformation, job displacement, and even existential threats — including human extinction, or so-called “x-risk.” He has expressed concern that the very technology he helped create may eventually surpass human intelligence in unpredictable ways, a scenario he finds particularly troubling.

As MIT Tech Review reported after interviewing him in May 2023, Hinton was particularly concerned about bad actors, such as authoritarian leaders, who could use AI to manipulate elections, wage wars, or carry out immoral objectives. He expressed concern that AI systems, when tasked with achieving goals, may develop dangerous subgoals, like monopolizing energy resources or self-replicating. While Hinton did not sign the high-profile letters calling for a moratorium on AI development, his departure from Google signaled a pivotal moment for the tech industry.
Hinton believes that, without global regulation, AI systems could become uncontrollable, a sentiment echoed by many within the field. His vision for AI is now shaped by both its immense potential and the looming risks it carries. Reflecting on his work after winning the Nobel, Hinton told CNN that generative AI “will be comparable with the industrial revolution. But instead of exceeding people in physical strength, it’s going to exceed people in intellectual ability. We have no experience of what it’s like to have things smarter than us… we also have to worry about a number of possible bad consequences, particularly the threat of these things getting out of control.”

What Hinton won the Nobel for

Geoffrey Hinton’s recognition with the Nobel Prize comes as no surprise to those familiar with his extensive contributions to artificial intelligence. Born in London in 1947, Hinton pursued a PhD at the University of Edinburgh, where he embraced neural networks — an idea that was largely disregarded by most researchers at the time. In 1985, he and collaborator Terry Sejnowski created the “Boltzmann machine,” an algorithm, named for Austrian physicist Ludwig Boltzmann, capable of learning to identify elements in data.

Joining the University of Toronto in 1987, Hinton worked with graduate students to further advance AI. Their work became central to the development of today’s machine learning systems, forming the basis for many of the applications we use today, including image recognition, natural language processing, self-driving cars, and even language models like OpenAI’s GPT series.

In 2012, Hinton and two of his graduate students at the University of Toronto, Ilya Sutskever and Alex Krizhevsky, founded a spinoff company called DNNresearch to focus on advancing deep neural networks — specifically “deep learning,” which models artificial intelligence on the human brain’s neural pathways to improve machine learning capabilities.
Hinton and his collaborators developed a neural network capable of recognizing images (like flowers, dogs, and cars) with unprecedented accuracy, a feat that had long seemed unattainable. Their research fundamentally changed AI’s approach to computer vision, showcasing the immense potential of neural networks when trained on vast amounts of data.

Despite its significant achievements, DNNresearch had no products or immediate commercial ambitions when it was founded. Instead, it was formed as a mechanism for Hinton and his students to more effectively navigate the growing interest in their work from major tech companies. In fact, they put the company up for auction in December 2012, triggering a competitive bidding war between Google, Microsoft, Baidu, and DeepMind, as recounted in a 2021 Wired magazine article by Cade Metz. Hinton eventually chose to sell to Google for $44 million, even though he could have driven the price higher. This auction marked the beginning of an AI arms race between tech giants, driving rapid advancements in deep learning and AI technology.

This background is critical to understanding Hinton’s impact on AI and how his innovations contributed to his being awarded the Nobel Prize in Physics today, reflecting the foundational importance of his work in neural networks and machine learning to the evolution of modern AI. U of T President Meric Gertler congratulated Hinton on his accomplishment, highlighting the university’s pride in his historic achievement.

Hopfield’s legacy

John J. Hopfield, a professor at Princeton University who shares the Nobel Prize with Hinton, developed an associative memory model, known as the Hopfield network, which revolutionized how patterns, including images, can be stored and reconstructed.
This model applies principles from physics, specifically atomic spin systems, to neural networks, enabling them to work through incomplete or distorted data to restore full patterns. The approach is similar to how the diffusion models powering image and video AI services learn to create new images by training on reconstructing old ones. His contributions have not only influenced AI but have also impacted computational neuroscience and error correction, showcasing the interdisciplinary relevance of his work, and paved the way for further advancements in AI, including Hinton’s Boltzmann machine.

While Hinton’s work catapulted neural networks into the modern era, Hopfield’s earlier breakthroughs laid a crucial foundation for pattern recognition in neural models. Both laureates’ achievements have significantly influenced the rapid growth of AI, leading
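The associative memory Hopfield introduced can be sketched in a few lines of plain Python. This is a minimal toy illustration (one stored pattern, Hebbian outer-product weights, synchronous sign updates), not code from either laureate’s work:

```python
# Toy Hopfield network: store +/-1 patterns via the Hebbian rule, then
# recover a stored pattern from a corrupted copy by repeated sign updates.
def train(patterns):
    """Build a symmetric weight matrix from +/-1 patterns (zero diagonal)."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, state, steps=5):
    """Synchronously update every unit toward the nearest stored pattern."""
    for _ in range(steps):
        state = [1 if sum(w[i][j] * s for j, s in enumerate(state)) >= 0 else -1
                 for i in range(len(state))]
    return state

stored = [1, 1, -1, -1, 1, -1]
w = train([stored])
noisy = [1, -1, -1, -1, 1, -1]  # one unit flipped
print(recall(w, noisy))
```

Flipping one unit of the stored pattern and running recall restores the original — the pattern-completion behaviour described above, where the network works through distorted data to recover the full pattern.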


Nvidia releases plugins to improve digital human realism on Unreal Engine 5

Nvidia has released its latest tech for creating AI-powered characters who look and behave like real humans. At Unreal Fest Seattle 2024, Nvidia released its new Unreal Engine 5 on-device plugins for Nvidia Ace, making it easier to build and deploy AI-powered MetaHuman characters on Windows PCs. Ace is a suite of digital human technologies that provide speech, intelligence, and animation powered by generative AI.

Developers can now access a new Audio2Face-3D plugin for AI-powered facial animations (where lips and faces move in sync with audio speech) in Autodesk Maya. This plugin gives developers a simple and streamlined interface to make avatar development in Maya faster and easier. The plugin comes with source code, so developers can dive in and develop a plugin for the digital content creation (DCC) tool of their choice.

Nvidia has also built an Unreal Engine 5 renderer microservice that leverages Epic’s Unreal Pixel Streaming technology. This microservice now supports the Nvidia Ace Animation Graph microservice and the Linux operating system in early access. The Animation Graph microservice enables realistic and responsive character movements, and with Unreal Pixel Streaming support, developers can stream their MetaHuman creations to any device.

Nvidia is making it easier to make MetaHumans with ACE. The Nvidia Ace Unreal Engine 5 sample project serves as a guide for developers looking to integrate digital humans into their games and applications.
This sample project expands the number of on-device ACE plugins:

- Audio2Face-3D for lip sync and facial animation
- Nemotron-Mini 4B Instruct for response generation
- RAG for contextual information

Nvidia said developers can build a database full of contextual lore for their intellectual property, generate relevant responses at low latency, and have those responses drive corresponding MetaHuman facial animations seamlessly in Unreal Engine 5. Each of these microservices was optimized to run on Windows PCs with low latency and a minimal memory footprint.

Nvidia unveiled a series of tutorials for setting up and using the Unreal Engine 5 plugin. The new plugins are coming soon; to get started today, devs should ensure they have the appropriate Nvidia Ace plugin and Unreal Engine sample downloaded alongside a MetaHuman character.

Autodesk Maya offers high-performance animation functions for game developers and technical artists to create high-quality 3D graphics. Developers can now more easily generate high-quality, audio-driven facial animation for any character with the Audio2Face-3D plugin. The user interface has been streamlined, and you can seamlessly transition to the Unreal Engine 5 environment. The source code and scripts are highly customizable and can be modified for use in other digital content creation tools.

To get started on Maya, devs can get an API key or download the Audio2Face-3D NIM. Nvidia NIM is a set of easy-to-use AI inference microservices that speed up the deployment of foundation models in any cloud or data center. Then ensure you have Autodesk Maya 2023, 2024 or 2025. Access the Maya ACE Github repository, which includes the Maya plugin, gRPC client libraries, test assets and a sample scene — everything you need to explore, learn and innovate with Audio2Face-3D.

Developers deploying digital humans through the cloud are trying to reach as many customers as possible simultaneously; however, streaming high-fidelity characters requires significant compute resources.
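The RAG component mentioned above pairs retrieval with generation. As a toy illustration of the retrieval half only — a hypothetical keyword-overlap lookup over a small “lore” database, unrelated to Nvidia’s actual microservices:

```python
# Toy retrieval step for a RAG pipeline: score each lore entry by word
# overlap with the player's question and return the best match, which the
# response-generation model would then receive as context.
def retrieve(lore, question):
    """Return the lore entry sharing the most words with the question."""
    q_words = set(question.lower().split())
    # A naive split keeps punctuation attached to words; fine for a toy.
    return max(lore, key=lambda entry: len(q_words & set(entry.lower().split())))

lore = [
    "the blacksmith erik forges swords in the northern village",
    "queen mira rules the coastal city of saltmere",
    "a dragon sleeps beneath the old mountain pass",
]
print(retrieve(lore, "who rules the city of saltmere"))
```

A production pipeline would use embeddings rather than raw word overlap, but the shape is the same: fetch the most relevant lore, then hand it to the response model (here, Nemotron-Mini 4B Instruct) so the character answers in keeping with its world.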
Today, the latest Unreal Engine 5 renderer microservice in Nvidia Ace adds support for the Nvidia Animation Graph microservice and the Linux operating system in early access. Animation Graph is a microservice that facilitates the creation of animation state machines and blend trees. It gives developers a flexible node-based system for animation blending, playback and control.

The new Unreal Engine 5 renderer microservice with pixel streaming consumes data coming from the Animation Graph microservice, allowing developers to run their MetaHuman character on a server in the cloud and stream its rendered frames and audio to any browser and edge device over Web Real-Time Communication (WebRTC). Devs can apply for early access to download the Unreal Engine 5 renderer microservice today.

You can learn more about Nvidia Ace and download the NIM microservices to begin building game characters powered by generative AI. Developers will be able to apply for early access to download the Unreal Engine 5 renderer microservice with support for the Animation Graph microservice and Linux OS. The Maya Ace plugin is available to download on GitHub.


Foremski's last word: When every company is a media company

My first post on ZDNet was in January 2006, in a column discussing “skinny applications.” In the 16 years since then, I’ve continued to try to stay ahead of the curve in spotting key trends. The most important trend I’ve written about, perhaps my biggest obsession, has been the disruption of the media industry’s business model by the technologies of the Internet.

I’ve long recognized that the web is a powerful publishing technology; the development of what was called Web 2.0, coupled with the explosive use of blogging technologies, added a powerful additional feature. There is a double use: the computer screen is both the publication and the printing press, in that the screen can also publish back. That made possible social media platforms, targeted advertising, and many other aspects of the modern online world as we currently experience it.

Advertisers no longer need to rely on newspapers, TV, or radio to reach potential customers. They can reach and interact with consumers directly, whether the customers like it or not. The lost revenues have resulted in poor-quality media and a danger to society. We recognize the need for media professionals, the gatekeepers, to keep things like hate speech and fake news in check — something that today’s media companies like Facebook and Twitter either cannot or will not do.

Today we understand the value of traditional media and how it moderated public discussions, keeping them civil and focused. We understand the value of newspapers in keeping fake news out, and the value of high-quality, verified news stories. So why can’t we capture that value and reward it? Why are scam sites filled with fake news getting rewarded? Why do we still have no solution? With all our visionaries and tech geniuses, we still haven’t come up with a technology that can capture the value of high-quality media and reward it so that more can be produced.
I realized nearly 20 years ago that the demise of the media industry means that every company has to become a media company to some degree. Journalists used to visit a company and write about its achievements, but there are few journalists left, and they are overworked. Companies have been forced to try to produce their own stories about themselves.

However, companies are not well suited to being media companies, and they find it hard to produce media content of every kind. Companies struggle with the editorial process: a single article produced by a company can involve weeks of meetings with stakeholders who can veto it at any point. This is a terribly inefficient and expensive way to produce media content.

Companies now have access to incredibly powerful media technologies: for example, they can outfit a high-definition video and recording studio, including sophisticated editing software, for very little cost. But it is not enough. Companies still need to produce compelling content on a regular schedule, and it has to compete in a world of compelling content — which is why companies struggle to be media companies and generally do a very bad job.

But companies must get better at being media companies, because the disruption in the media industry continues without a solution in sight. Companies will have to get better at producing and distributing their own media. Over the past two decades, I have been showing some of the largest tech companies — such as IBM, Intel, HP, SAP, Tibco, and Infineon — how to be media companies. Change happens slowly, but it happens.

“Every company is a media company” is one of the most important and disruptive trends in our global society. It’s exciting to be in the middle of such a shift. While this is my final IMHO post here on ZDNet, you can keep up with some of my work at Silicon Valley Watcher – at the intersection of technology and media.
And look out for Every Company is a Media Company: EC=MC, a transformative equation for every business.


Why Asian Immigrants Come to the U.S. and How They View Life Here

74% say they’d move to the U.S. again if they could, but a majority says the nation’s immigration system needs significant changes.

The terms Asian, Asians living in the United States, U.S. Asian population and Asian Americans are used interchangeably throughout this report to refer to U.S. adults who self-identify as Asian, either alone or in combination with other races or Hispanic identity, unless otherwise noted.

The term immigrants, when referring to survey respondents, includes those born outside the 50 U.S. states, the District of Columbia, Puerto Rico or other U.S. territories. When referring to Census Bureau data, this group includes those who were not U.S. citizens at birth – in other words, those born outside the 50 U.S. states or D.C., Puerto Rico, or other U.S. territories to parents who were not U.S. citizens. Immigrant and foreign born are used interchangeably throughout this report.

The term U.S. born refers to people born in the 50 U.S. states, the District of Columbia, Puerto Rico or other U.S. territories.

Ethnicity labels, such as Chinese or Filipino, are used in this report for findings for Asian immigrant ethnic groups, such as Chinese, Filipino, Indian, Korean or Vietnamese. For this report, ethnicity is not nationality or birthplace. For example, Chinese immigrants in this report are those self-identifying as of Chinese ethnicity, rather than necessarily being a current or former citizen of the People’s Republic of China. Ethnic groups in this report include those who self-identify as one Asian ethnicity only, either alone or in combination with a non-Asian race or ethnicity.

Less populous Asian immigrant ethnic groups in this report are those who self-identify with ethnic groups that are not among the five largest Asian immigrant ethnic groups and identify with only one Asian ethnicity. These ethnic origin groups each represent about 3% or less of the Asian immigrant population in the U.S.
For example, those who identify as Burmese, Hmong, Japanese or Pakistani, among others, are included in this category. These groups are not reportable on their own due to small sample sizes, but collectively they are reportable under this category.

Country of origin is used in this report to refer to the places that respondents trace their roots or origin to. This may be influenced by ethnicity, birthplace, nationality, ancestry, or other social, cultural or political factors. This study asks several questions about respondents’ connection to and views of their country of origin. Subsequently, in different sections of this report, country of origin is used interchangeably with home country, country they came from, country where they were born, and country where their family or ancestors are from, depending on how the specific question was asked. For the exact question wording, refer to the topline.

Throughout this report, the phrases Democrats and Democratic leaners and Democrats both refer to respondents who identify politically with the Democratic Party or who are independent or identify with some other party but lean toward the Democratic Party. Similarly, the phrases Republicans and Republican leaners and Republicans both refer to respondents who identify politically with the Republican Party or are independent or identify with some other party but lean toward the Republican Party.

Pew Research Center conducted this analysis to understand Asian immigrants’ experiences coming to and living in the United States. This report is part of the Center’s ongoing in-depth analysis of public opinion among Asian Americans. The data in this report comes from two main sources. The first data source is Pew Research Center’s survey of Asian American adults, conducted from July 5, 2022, to Jan. 27, 2023, in six languages among 7,006 respondents, including 5,036 Asian immigrants. For details, refer to the methodology.
For questions used in this analysis, along with responses, refer to the topline.

The second set of data sources is the U.S. Census Bureau’s decennial census data and American Community Surveys (ACS). Demographic analyses of Asian immigrants are based on full-count decennial censuses from 1860, 1870, 1880, 1900, 1910, 1920, 1930 and 1940; the 1% 1950 census; the 5% 1960 census; the 3% 1970 census (1% Form 1 state sample, 1% Form 1 metro sample, and 1% Form 1 neighborhood sample); the 5% 1980 census, 5% 1990 census, and 5% 2000 census; and the 2009, 2014, 2019 and 2022 ACS 5-year samples. Both decennial census data and ACS data were obtained through IPUMS USA.

Analysis of census data for the immigrant or foreign-born population consists of people born outside the United States or its territories who were not U.S. citizens at birth. People born in the following places were defined as part of the U.S.-born population, provided these territories were recognized as U.S. territories at the time of the respective surveys: Alaska (1870 and later); Hawaii, Puerto Rico, Guam and American Samoa (1900 and later); the Philippines (1900-1940); the Panama Canal Zone (1910-1970); the U.S. Virgin Islands (1920 and later); the Trust Territory of the Pacific (1950-1980); and the Northern Mariana Islands (1950 and later).

Pew Research Center is a subsidiary of The Pew Charitable Trusts, its primary funder. The Center’s Asian American portfolio was funded by The Pew Charitable Trusts, with generous support from The Asian American Foundation; Chan Zuckerberg Initiative DAF, an advised fund of the Silicon Valley Community Foundation; the Robert Wood Johnson Foundation; the Henry Luce Foundation; the Doris Duke Foundation; The Wallace H. Coulter Foundation; The Dirk and Charlene Kabcenell Foundation; The Long Family Foundation; Lu-Hebert Fund; Gee Family Foundation; Joseph Cotchett; the Julian Abdey and Sabrina Moyle Charitable Fund; and Nanci Nishimura.
We would also like to thank the Leaders Forum for its thought leadership and valuable assistance in helping make this survey possible. The strategic communications campaign used to promote the research was made possible with generous support from the Doris Duke Foundation.

Asian Americans are the only major racial or ethnic group in the United States that is majority immigrant. Some 54% of the 24 million Asian Americans living in the U.S. are immigrants; among Asian adults, that share rises to 67%.

Why Asian Immigrants Come to the U.S. and How They View Life Here

Predictions: Where will storage be in 40 years?

Editor’s note: Retiring after 40 years in the computer industry and 14 years with ZDNet will leave Robin Harris with more time for other adventures, as seen above. Robin: Your wit, intelligence and expertise will be sorely missed. [Photo credit: Robin Harris]

Robin Harris

In 1981, a 5MHz 32-bit supermini CPU cost $150,000, hard drives cost $50/MB, and IBM’s SNA network was dominant in many enterprises. FORTRAN and COBOL ruled technical and business computing. A PC meant either Radio Shack, Commodore, or Apple, and none were common in business.

The next five years changed all that. The IBM PC legitimized microcomputers. Seagate shipped a 5MB 5.25-inch hard drive. Intel, DEC, and Xerox introduced Ethernet. VisiCalc drove PC adoption across enterprises. And boffins were using a fledgling ARPANET, often running a rickety OS called UNIX.

I never would have predicted that 40 years later I’d be wearing more computing power on my wrist than you could buy then, with a better display and a wider range of applications. Or that my $1,000 notebook would have more storage than all but the largest enterprises. But that won’t stop me from trying. Here’s what I look forward to, 40 years from now.

Predictions

Storage will continue to evolve at an accelerating pace. Non-volatile memory — the 21st century’s answer to magnetic core memory — is especially exciting. Several new storage technologies will reach market, each faster and more resilient, but not necessarily cheaper, than what we have today. Bio-based and crystal-based storage will be battling it out. Bio will be denser, but crystalline storage will be faster and longer-lived, and each will find a niche.

Persistence, the raison d’être of storage, will emerge as a key problem. Our entire digital civilization is based on magnetic media and quantum wells with a 5-10 year lifespan. A few EMPs in the right places — or one massive solar flare — and the last several decades of knowledge and records will be lost forever.
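For scale, the 1981 price points quoted above are worth running through a back-of-the-envelope calculation. In the sketch below, the 1981 figure comes from the article, but the present-day $/TB figure is my assumption (roughly current consumer hard-drive pricing), not a number from the piece:

```python
# Rough illustration of how far disk pricing has fallen since 1981.
# The 1981 figure is quoted above; the modern figure is an assumption.
COST_1981_PER_MB = 50.0        # $50/MB in 1981
COST_TODAY_PER_TB = 20.0       # assumed ~$20/TB for consumer hard drives

cost_1981_per_tb = COST_1981_PER_MB * 1_000_000  # decimal: 1 TB = 1,000,000 MB
drop_factor = cost_1981_per_tb / COST_TODAY_PER_TB

print(f"1981 cost per TB: ${cost_1981_per_tb:,.0f}")  # $50,000,000
print(f"Price decline: {drop_factor:,.0f}x")          # 2,500,000x
```

Of course, none of that cheap capacity matters if the media holding it fails within a decade.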
We must do better, but we won’t until a major disaster strikes.

The virtualization of human interaction, turbocharged by more pandemics, will continue apace. That means the rapid growth of hyper-scale cloud computing won’t be able to keep pace with the data generated by mobile devices at the edge, especially as prices continue to drop, third-world incomes rise, and another 4 billion people come online. Gathering, analyzing, reducing and monetizing edge data will be a major industry, enabling us to manage, I hope, the logistics of a much more turbulent world of 10 billion people.

Social networks will be recognized and regulated as public utilities. Letting a few hundred malicious actors derail and pollute public discourse is unacceptable, no matter how profitable a few billionaires and their sycophants find it.

The Covid-driven rush to rural areas won’t last. Urbanization has been a secular trend for 500 years because cities are wealth-creation machines. Back in the 1840s, living in a city cut a decade off your lifespan, and people still flocked to them. We’ll get past Covid.

Turning the page

I’m looking 40 years ahead because, as of September 1, 2021, I’m retiring from the computer industry and more than 14 years at ZDNet. Joining the industry began a major new chapter in my life. Now it’s time for another chapter. Literally. I intend to finish the historical novel I’ve been working on for the last six years. I’ve put a draft of the first chapter online here. Warning: if you need American history sugar-coated, this isn’t for you.

As excited as I am about this next chapter in my life and work, I’ve also engaged in some anticipatory nostalgia. I’ll miss people I’ve met at CES, NAB, the Usenix FAST conference, and the excellent Non-volatile Memory Workshop at UC San Diego, and now, the people I will never meet — like most of the rest of the excellent ZDNet crew.
As I turn the page on this chapter of my life, I’m looking forward to continuing to learn and grow as best I can. May it ever be so for you as well. Comments welcome. What turning points are you looking forward to?


Intel Core Ultra 200S desktop processor debuts for AI PCs for enthusiasts

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More

Intel launched the new Intel Core Ultra 200S series processor family, which will scale AI PC capabilities to desktop platforms and usher in the first enthusiast desktop AI PCs. Led by the Intel Core Ultra 9 processor 285K, the latest generation of enthusiast desktop processors includes five unlocked desktop processors equipped with up to 8 next-gen Performance-cores (P-cores), the fastest cores available for desktop PCs, and up to 16 next-gen Efficient-cores (E-cores) that together deliver up to 14% more performance in multi-threaded workloads than the previous generation. The new family comprises the first neural processing unit (NPU)-enabled desktop processors for enthusiasts and comes with a built-in Xe GPU with state-of-the-art media support.

“The new Intel Core Ultra 200S series processors deliver on our goals to significantly cut power usage while retaining outstanding gaming performance and delivering leadership compute. The result is a cooler and quieter user experience elevated by new AI gaming and creation capabilities enabled by the NPU, and leadership media performance that leverages our growing graphics portfolio,” said Robert Hallock, vice president and general manager of AI and Technical Marketing at the Client Computing Group at Intel, in a statement.

Why it matters

Thanks to the latest Intel core and efficiency innovations, Intel Core Ultra 200S desktop processors deliver a landmark reduction in power usage, with up to 58% lower package power in everyday applications and up to 165W lower system power while gaming. The new processor family combines improved efficiency with improved performance, also delivering up to 6% faster single-threaded and up to 14% faster multi-threaded performance over the previous generation.
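Taken at face value, those quoted figures imply a striking efficiency gain. A hedged sketch of the arithmetic, assuming (which the announcement does not state) that the performance and power deltas apply to the same workload:

```python
# Illustrative only: fold the quoted gen-over-gen deltas into a rough
# performance-per-watt estimate. Assumes both deltas apply to the same
# workload, which the announcement does not claim.
MULTI_THREAD_SPEEDUP = 1.14       # "up to 14% faster multi-threaded"
PACKAGE_POWER_RATIO = 1 - 0.58    # "up to 58% lower package power"

perf_per_watt_gain = MULTI_THREAD_SPEEDUP / PACKAGE_POWER_RATIO
print(f"~{perf_per_watt_gain:.2f}x perf/W under these assumptions")
```

Under those (generous) assumptions, the quoted best-case numbers work out to roughly 2.7x more multi-threaded performance per package watt.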
With complete AI capabilities powered by the CPU, GPU and NPU, enthusiasts get the intelligent and powerful performance they need for content creation and gaming, all with a reduced energy footprint. By bringing the AI PC to enthusiasts for the first time, the Intel Core Ultra 200S series processors deliver up to 50% faster performance in AI-enabled creator applications than competing flagship processors. The newly available NPU enables offloading of AI functions. Examples include freeing up discrete GPUs to increase gaming frame rates, significantly reducing power usage in AI workloads, and enabling accessibility use cases such as face- and gesture-tracking in games while minimizing performance impact.

Intel’s 1st AI PC for enthusiasts

[Image: Performance gains for Intel’s latest desktop AI PC processor]

With up to 36 platform TOPS, the new Intel Core Ultra 200S series processor is Intel’s first and best desktop processor for AI PCs.

The Complete Enthusiast Solution

Intel Core Ultra 200S series processors offer excellent performance in AI and content creation, and power an immersive gaming experience, with up to 28% gaming performance uplift compared to competing flagship processors. Intel Core Ultra 200S series processors use the new Intel 800 Series chipset, extending platform compatibility with up to 24 PCIe 4.0 lanes, up to 8 SATA 3.0 ports, and up to 10 USB 3.2 ports, empowering enthusiasts to take advantage of the latest connectivity, storage, and other technologies. Intel Core Ultra 200S series processors also bring new overclocking functionality with fine-grained controls, with top turbo frequency adjustable in 16.6 MHz steps for P-cores and E-cores. A new memory controller supports fast, new XMP and CUDIMM DDR5 memory up to 48GB per DIMM for up to 192GB in total, and the Intel Extreme Tuning Utility now includes one-click overclocking enhancements.
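The 192GB ceiling follows directly from the per-DIMM limit, assuming the usual four DIMM slots on an enthusiast desktop board (the slot count is my assumption, not stated in the announcement):

```python
# Sketch: relate the per-DIMM DDR5 limit to the quoted platform maximum.
DIMM_SLOTS = 4          # typical enthusiast desktop board (assumption)
MAX_GB_PER_DIMM = 48    # CUDIMM DDR5 limit stated for the platform

max_platform_gb = DIMM_SLOTS * MAX_GB_PER_DIMM
print(max_platform_gb)  # 192, matching the quoted 192GB total
```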
And Intel Core Ultra 200S processors come equipped with 20 CPU PCIe 5.0 lanes, 4 CPU PCIe 4.0 lanes, support for 2 integrated Thunderbolt 4 ports, Wi-Fi 6E and Bluetooth 5.3. Intel Killer Wi-Fi delivers supercharged wireless performance and enables seamless, immersive online gameplay through application priority auto-detection, bandwidth analysis and management, and smart AP selection and switching.

The machines also feature multi-engine security: Intel Silicon Security Engine helps preserve data confidentiality and code integrity while maintaining high performance for demanding AI workloads.

Intel Core Ultra 200S series processors will be available at retail, online and in stores, and via OEM partner systems starting Oct. 24, 2024.


Best computer science jobs

Many computer science careers offer roles to contract and freelance tech professionals. For example, game designers, web developers, and software developers may work on a contract basis for a specific project rather than as full-time employees. Computer science professionals may prefer the flexibility of contract work. Rather than working for a single company, freelancers build a portfolio of clients and work on a variety of projects. That can mean designing custom websites for clients, contributing to a software testing project, or creating custom code.

However, consider the benefits and drawbacks of contract positions before going freelance. Computer science contractor jobs do not include benefits, for example. While contractors often earn a higher hourly rate, they pay additional taxes, and they may find themselves out of work between projects.
