Google’s Gemini enterprise coding assistant shows enterprise-focused coding is growing

Google Cloud’s newest feature, Gemini Code Assist Enterprise, aims to compete with GitHub’s enterprise-focused coding platform by explaining local codebases and offering stronger security.

Gemini Code Assist Enterprise, formerly Duet AI, lets developers code faster because it understands their organization’s codebase, has a large context window and allows for customization. Developers can access the assistant for $45 per month per user, or $19 per month with a yearly subscription.

“Developers can stay in flow state longer, bringing more insights directly to their IDEs, while also completing complex tasks like upgrading a Java version in an entire repo,” said Ryan J. Salva, senior director of developer tools and operations at Google Cloud, in a blog post. “This means developers get to focus on creative problem-solving, leading to greater job satisfaction while you get a faster time-to-market, gaining a competitive edge.”

The platform offers code suggestions based on local codebases. Google said the large context window helps developers “generate or transform code that’s more relevant to your application.” The coding assistant can connect directly to other Google Cloud services like Firebase, Databases, BigQuery, Colab Enterprise, Apigee and Application Integration. Salva said this is meant to meet developers where they are, since “the more services it touches, the faster your builders can create and deliver applications.”

Code customization is based on internal libraries, so Code Assist can make custom code suggestions. It will index GitHub and GitLab libraries, and will support self-hosted libraries early next year. “A code assistant dramatically reduces the time to ramp on new technologies and incorporates the nuances of an organization’s coding standards into the suggestions it provides,” Salva wrote.
However, Google’s biggest selling point for its coding assistant is enterprise-grade security. It extends Google’s promise that it won’t use customer data to train its Gemini models. It also promises that users have complete control over which repositories the code assistant will index, and that they can purge data anytime. Google will also offer indemnification (legal cover for any potential lawsuit) for any code generated by Gemini Code Assist Enterprise.

Enterprise-focused coding assistants

Coding assistance, of course, is nothing new for generative AI. But as more enterprises look to integrate coding assistants into their technology stacks, providers are tailoring their offerings to them.

GitHub released an enterprise-focused Copilot, GitHub Copilot Enterprise, in February, offering largely similar features. Oracle’s coding assistant focuses on Java and SQL enterprise applications. Other companies, like Harness, have also released coding assistants that give real-time suggestions and target businesses; Harness’s assistant is built on Gemini.

Google entering the fray underscores the increasing competition among coding assistants and the demand for enterprise-specific solutions, even for a task most chatbots can readily do. Moving coding assistants out of separate chatbots and into developer environments (or, in Google’s case, other channels) gives flexibility to companies looking to improve productivity. The more quickly developers can test code and fix bugs in local codebases, the faster companies can move and deploy applications.


Acknowledgments

This analysis was produced by Pew Research Center as part of the Pew-Templeton Global Religious Futures project, which analyzes religious change and its impact on societies around the world. Funding for the Global Religious Futures project comes from The Pew Charitable Trusts and the John Templeton Foundation (grant 63095). This publication does not necessarily reflect the views of the John Templeton Foundation. Pew Research Center is a subsidiary of The Pew Charitable Trusts, its primary funder. This report is a collaborative effort based on the input and analysis of the following individuals. Find related reports online at pewresearch.org/religion.

Primary Researchers

Jonathan Evans, Senior Researcher
Kelsey Jo Starr, Research Analyst

Research Team

Becka A. Alper, Senior Researcher
Laura Clancy, Research Analyst
Alan Cooperman, Director, Religion Research
Manolo Corichi, Research Analyst
Moira Fagan, Research Associate
Janell Fetterolf, Senior Researcher
Sneha Gubbala, Research Assistant
Christine Huang, Research Associate
Asta Kallo, Research Assistant
Kirsten Lesage, Research Associate
Jordan Lippert, Research Analyst
William Miner, Research Analyst
Besheer Mohamed, Senior Researcher
Justin Nortey, Research Analyst
Jacob Poushter, Associate Director, Global Attitudes Research
Andrew Prozorovsky, Research Assistant
Sofia Hernandez Ramones, Research Assistant
Michael Rotolo, Research Associate
Laura Silver, Associate Director, Global Attitudes Research
Maria Smerkovich, Research Associate
Gregory A. Smith, Senior Associate Director, Religion Research
Patricia Tevington, Research Associate
Richard Wike, Director, Global Attitudes Research

Methods Team

Dorene Asare-Marfo, Panel Manager
Anna Brown, Research Methodologist
Scott Keeter, Senior Survey Advisor
Courtney Kennedy, Vice President, Methods and Innovation
Arnold Lau, Research Methodologist
Carolyn Lau, International Research Methodologist
Andrew Mercer, Principal Methodologist
Patrick Moynihan, Associate Director, International Research Methods
Georgina Pizzolitto, Research Methodologist
Dana Popky, Associate Panel Manager
Sofi Sinozich, International Research Methodologist

Editorial and Graphic Design

Jeff Diamant, Senior Writer/Editor
Rebecca Leppert, Copy Editor
Bill Webster, Senior Information Graphics Designer

Communications and Web Publishing

Achsah Callahan, Communications Manager
Justine Coleman, Associate Digital Producer
Andrew Grant, Communications Associate
Anna Schiller, Associate Director, Communications

In addition, Pew Research Center is grateful for many others who provided valuable advice and assistance on this project, including Rebecca Kielty and Brianna Vetter. Former Center staffer Sarah Austin also contributed to this report. We appreciate the following individuals for advising us on strategic outreach: Eugenia Mitchelstein, associate professor of communication at Universidad de San Andrés (Argentina), and Sebastián Lacunza, columnist at elDiarioAR.com (Argentina).


AI21 CEO says transformers not right for AI agents due to error perpetuation

As more enterprise organizations look to the so-called agentic future, one barrier may be how AI models are built. For enterprise AI developer AI21, the answer is clear: the industry needs to look to other model architectures to enable more efficient AI agents.

Ari Goshen, AI21’s CEO, said in an interview with VentureBeat that transformers, the most popular model architecture, have limitations that would make a multi-agent ecosystem difficult. “One trend I’m seeing is the rise of architectures that aren’t Transformers, and these alternative architectures will be more efficient,” Goshen said. “Transformers function by creating so many tokens that can get very expensive.”

AI21, which focuses on developing enterprise AI solutions, has argued before that transformers should be an option for model architecture, but not the default. It is developing foundation models using its Jamba architecture, short for Joint Attention and Mamba architecture. Jamba is based on the Mamba architecture developed by researchers from Princeton University and Carnegie Mellon University, which can offer faster inference times and longer context.

Goshen said alternative architectures like Mamba and Jamba can often make agentic structures more efficient and, most importantly, affordable. In his view, Mamba-based models have better memory performance, which would make agents, particularly agents that connect to other models, work better.

He attributes the fact that AI agents are only now gaining popularity, and that most agents have not yet gone into production, to the industry’s reliance on LLMs built with transformers.

“The main reason agents are not in production mode yet is reliability, or the lack of reliability,” Goshen said.
“When you break down a transformer model, you know it’s very stochastic, so any errors will perpetuate.”

Enterprise agents are growing in popularity

AI agents emerged as one of the biggest trends in enterprise AI this year, with several companies launching AI agents and platforms to make it easy to build them.

ServiceNow announced updates to its Now Assist AI platform, including a library of AI agents for customers. Salesforce has its own stable of agents, called Agentforce, while Slack has begun allowing users to integrate agents from Salesforce, Cohere, Workday, Asana, Adobe and more.

Goshen believes this trend will become even more popular with the right mix of models and model architectures.

“Some use cases that we see now, like question and answers from a chatbot, are basically glorified search,” he said. “I think real intelligence is in connecting and retrieving different information from sources.” Goshen added that AI21 is in the process of developing offerings around AI agents.

Other architectures vying for attention

Goshen strongly supports alternative architectures like Mamba and AI21’s Jamba, mainly because he believes transformer models are too expensive and unwieldy to run.

Instead of the attention mechanism that forms the backbone of transformer models, Mamba can prioritize different data and assign weights to inputs, optimize memory usage and use a GPU’s processing power.

Mamba is growing in popularity, and other open-source and open-weight AI developers have begun releasing Mamba-based models in the past few months. Mistral released Codestral Mamba 7B in July, and in August, Falcon came out with its own Mamba-based model, Falcon Mamba 7B.

However, the transformer architecture has become the default, if not the standard, choice when developing foundation models. OpenAI’s GPT is, of course, a transformer model (it’s literally in the name), but so are most other popular models.

Goshen said that, ultimately, enterprises want whichever approach is more reliable.
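The memory argument above can be made concrete with a toy example. The sketch below, in plain NumPy with made-up dimensions (it is not AI21's Jamba code), runs the linear state-space recurrence that Mamba-style layers build on: the model carries a fixed-size hidden state from token to token, so per-token memory stays constant, whereas a transformer's attention cache grows with every token generated.

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Run a linear state-space recurrence over a token sequence.

    h_t = A @ h_{t-1} + B @ x_t   (fixed-size hidden state)
    y_t = C @ h_t

    Memory per step is O(state_size), independent of sequence length,
    unlike attention, whose key-value cache grows with every token.
    """
    seq_len, _ = x.shape
    h = np.zeros(A.shape[0])
    ys = []
    for t in range(seq_len):
        h = A @ h + B @ x[t]      # update the fixed-size state
        ys.append(C @ h)          # emit this token's output
    return np.stack(ys)

# Toy dimensions: 6 tokens, 4 input dims, 3-dim hidden state, 2 output dims
rng = np.random.default_rng(0)
x = rng.normal(size=(6, 4))
A = 0.9 * np.eye(3)              # stable decay of past state
B = rng.normal(size=(3, 4))
C = rng.normal(size=(2, 3))
y = ssm_scan(x, A, B, C)
print(y.shape)  # (6, 2): one output per token, constant state in between
```

Real Mamba layers make A, B and C input-dependent ("selective") and use a parallel scan on GPU, but the constant-memory recurrence is the core of the efficiency claim.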
But organizations must also be wary of flashy demos promising to solve many of their problems.

“We’re at the phase where charismatic demos are easy to do, but we’re closer to that than to the product phase,” Goshen said. “It’s okay to use enterprise AI for research, but it’s not yet at the point where enterprises can use it to inform decisions.”


Vera AI launches ‘AI Gateway’ to help companies safely scale AI without the risks

Vera AI Inc., a startup focused on responsible artificial intelligence deployment, announced today the general availability of its AI Gateway platform. The system aims to help organizations implement AI technologies more quickly and safely by providing customizable guardrails and model-routing capabilities.

“We’re really excited to be announcing the general availability of our model routing and guardrails platform,” said Liz O’Sullivan, CEO and co-founder of Vera, in an interview with VentureBeat. “We’ve been hard at work over the last year building something that could scalably and repeatably accelerate time to production for the kinds of business use cases that actually stand to generate a lot of excitement.”

[Image: Vera AI’s policy configuration interface, showcasing the platform’s granular content moderation tools. The dashboard allows companies to customize AI safeguards, balancing the need for innovation with responsible content management, a key selling point in Vera’s mission to make AI deployment both efficient and ethical. (Credit: Vera)]

Bridging the gap: How Vera’s AI gateway tackles last-mile challenges

The launch comes at a time when many companies are eager to adopt generative AI and other advanced AI technologies but remain hesitant due to potential risks and challenges in implementing safeguards. Vera’s platform sits between users and AI models, enforcing policies and optimizing costs across different types of AI requests.

“Businesses are only ever interested in doing one of three things, whether that’s make more money, save more money, or reducing risk,” O’Sullivan explained.
“We’ve focused ourselves squarely on the last mile problems, which people think, just like regular software engineering, that it’s going to be quick and easy, that these are just afterthoughts that you can apply to optimize costs or to reduce risks associated with things like disinformation and broad and CSAM, but they’re actually quite hard.”

Justin Norman, CTO and co-founder of Vera, emphasized the importance of nuance in AI policy implementation: “You want to be able to set the bar for where your system will respond and where it will not respond and what it will do, without having to rely upon what some other companies made a decision for you on.”

[Image: Vera AI’s interface demonstrates its content moderation capabilities, blocking a user’s input that failed to follow the specified rules, a key feature in the company’s mission to provide guardrails for responsible AI deployment. (Credit: Vera)]

From AI safety activism to startup success: The minds behind Vera

The company’s approach appears to be gaining traction. According to O’Sullivan, Vera is already “processing tens of thousands of model requests per month across a handful of paying customers.” The startup offers API-based pricing at one cent per call, aligning its incentives with customer success in AI deployment. Additionally, Vera has introduced a 30-day free trial, which can be accessed using the code “FRIENDS30,” allowing potential customers to experience the platform’s capabilities firsthand.

Vera’s launch is particularly noteworthy given the founders’ backgrounds. O’Sullivan, who serves on the National AI Advisory Committee, has a history of AI safety activism, including her work at Clarifai. Norman brings experience from government, academia and industry, including PhD work at UC Berkeley focused on AI robustness and evaluation.
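The gateway pattern described here, a layer that sits between users and models to enforce policies and route requests, can be sketched in a few lines. Everything below is hypothetical: the function names, policy patterns and model tiers are illustrative and do not reflect Vera's actual API.

```python
import re

# Hypothetical sketch of an AI-gateway request path: check configurable
# guardrail policies, route to a model tier by request type, and report the
# outcome. Names are illustrative only, not Vera's real interface.

BLOCKED_PATTERNS = [
    r"(?i)\bssn\b",            # example policy: refuse prompts mentioning SSNs
    r"\b\d{3}-\d{2}-\d{4}\b",  # or containing an SSN-shaped number
]

def check_policy(prompt: str) -> bool:
    """Return True if the prompt passes all configured guardrails."""
    return not any(re.search(p, prompt) for p in BLOCKED_PATTERNS)

def route(prompt: str) -> str:
    """Pick a model tier: a cheap model for short prompts, a larger one otherwise."""
    return "small-model" if len(prompt) < 200 else "large-model"

def gateway(prompt: str) -> dict:
    """The request path: guardrails first, then cost-aware routing."""
    if not check_policy(prompt):
        return {"status": "blocked", "reason": "policy violation"}
    return {"status": "routed", "model": route(prompt)}

print(gateway("Summarize this meeting transcript"))
# {'status': 'routed', 'model': 'small-model'}
print(gateway("My SSN is 123-45-6789"))
# {'status': 'blocked', 'reason': 'policy violation'}
```

The point of the pattern is that policy thresholds live in one configurable place (here, `BLOCKED_PATTERNS`) rather than being baked into each model integration, which is the customizability Norman describes.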
Navigating the AI safety landscape: Vera’s role in responsible innovation

As AI adoption accelerates across industries, platforms like Vera’s could play a crucial role in addressing safety and ethical concerns while enabling innovation. The startup’s focus on customizable guardrails and efficient model routing positions it well to serve both enterprise clients managing internal AI use and companies developing consumer-facing AI applications.

However, Vera faces a competitive landscape, with other AI safety and deployment startups also vying for market share. The company’s success will likely depend on its ability to demonstrate clear value to customers and stay ahead of rapidly evolving AI technologies and associated risks.

For organizations looking to responsibly implement AI, Vera’s launch offers a new option to consider. As O’Sullivan put it, “We’re here to make it as easy as possible to enjoy the benefits of AI while reducing the risks that things do go wrong.”


Many Catholics in the U.S. and Latin America Want the Church to Allow Birth Control and to Let Women Become Priests

Most view Pope Francis favorably, though his ratings have dropped

[Image: Pope Francis waves to the crowd as he arrives in the popemobile to celebrate an open-air Mass in Villavicencio, Colombia, on Sept. 8, 2017. (Alberto Pizzoli/AFP via Getty Images)]

This Pew Research Center analysis explores views on the Catholic Church and Pope Francis among Catholics in Latin America and the United States. All seven countries in the survey have Catholic populations that rank among the world’s 25 largest – notably including Brazil (largest), Mexico (second-largest) and the U.S. (fourth-largest) – according to the Vatican’s 2021 Statistical Yearbook of the Church. And the six Latin American countries surveyed account for roughly three-quarters of the region’s Catholics.

For non-U.S. data, this analysis draws on nationally representative surveys of 6,234 adults – including 3,655 Catholics – conducted from Jan. 22 to April 27, 2024. Surveys were conducted face-to-face in Argentina, Brazil, Chile, Colombia, Mexico and Peru. In the U.S., we surveyed 12,693 respondents from Feb. 13 to 25, 2024, including 2,021 Catholics. Most of the survey’s respondents (10,642) – including all of the survey’s Catholic respondents – are members of the American Trends Panel (ATP), an online survey panel recruited through national random sampling of residential addresses, which gives nearly all U.S. adults a chance of selection. Read more about the ATP’s methodology. The remaining respondents (2,051) are members of three other panels: the Ipsos KnowledgePanel, the NORC Amerispeak Panel and the SSRS Opinion Panel. All three are national survey panels recruited through random sampling (not “opt-in” polls). We used these additional panels to ensure that the survey would have enough Jewish and Muslim respondents to be able to report on their views.
(While Jewish and Muslim respondents are not discussed in this particular report that focuses on Catholic topics, they are discussed in other reports based on this survey.) The U.S. data is weighted to be representative of the U.S. adult population by gender, race, ethnicity, partisan affiliation, education, religious affiliation and other categories. Here are the questions used for the report, along with responses, and the survey methodology.

This analysis was produced by Pew Research Center as part of the Pew-Templeton Global Religious Futures project, which analyzes religious change and its impact on societies around the world. Funding for the Global Religious Futures project comes from The Pew Charitable Trusts and the John Templeton Foundation (grant 63095). This publication does not necessarily reflect the views of the John Templeton Foundation.

A new survey by Pew Research Center asked Catholics in six Latin American countries and the United States how they think the church should handle a variety of matters related to contraception, the priesthood and sexuality. Among the main findings:

- Most Catholics in all seven countries want the church to allow Catholics to use birth control. The shares saying this range from 86% in Argentina to 63% in Brazil.
- In most of the countries surveyed, majorities of Catholics also say the church should allow women to become priests.
- Opinion is more divided on whether the church should allow priests to get married. Roughly two-thirds of Catholics in Argentina, Chile and the U.S. are in favor, but majorities in Mexico and Peru say the church should not allow priests to marry.
- Views on whether the church should recognize the marriages of gay and lesbian couples vary among Catholics in the countries surveyed. Majorities of Catholics in Argentina and Chile say the church should recognize the marriages of gay and lesbian couples, and just over half of U.S. Catholics agree. In the other four countries, fewer than half take this stance.

The survey also finds that Pope Francis, the first Latin American pope, remains broadly popular among Catholics across the region – though his favorability ratings are lower now than they were a decade ago, shortly after his papacy began in March 2013. The decrease in favorability has been sharpest among Catholics in Argentina, his country of birth. Ten years ago, nearly all Catholics surveyed there (98%) expressed a favorable opinion of Francis, compared with 74% today. And in the U.S., where a February 2014 survey found that 85% of Catholics viewed the pope favorably, 75% now take that view. Most Catholics surveyed also say Francis represents a change in the Catholic Church’s direction, with more of them calling it a major change than a minor one.

These are among the key findings of a survey of 5,676 Catholics, conducted in English, Spanish and Portuguese from January through April 2024 in seven countries: Argentina, Brazil, Chile, Colombia, Mexico, Peru and the U.S. The rest of this report explores these findings in more detail. More than a dozen of our surveys have measured U.S. Catholics’ favorability toward Pope Francis since the start of his papacy. Find this more detailed U.S. trend in our recent report, “Majority of U.S. Catholics Express Favorable View of Pope Francis.”

How we worded these questions

We used simple, common phrases in the survey questions about some steps that Catholics would – or would not – like to see the church take. Our goal was to make the questions easy to understand for as many respondents as possible. In some cases, the wording of the questions involved a trade-off between broad understandability and theological nuance. For example, one question asks whether the church should “allow priests to get married.” This would not, strictly speaking, be a change in doctrine.
The Catholic Church already allows married priests under certain circumstances, such as if a man was married before being ordained in an Eastern Catholic Church. Technically, the church considers the rule of celibacy for priests to be a “discipline” rather than a doctrine. Nonetheless, allowing parish priests to get married and continue in their duties would represent a big change in the everyday life of the church in the United States and Latin America.  Similarly, another question asks whether the church should allow unmarried Catholics who “are living with a romantic partner” to receive Communion. Actually, Catholicism has no rule against unmarried people living together. The church’s teaching


4. Accuracy of election news

Most U.S. adults (73%) say they see inaccurate election news at least somewhat often, including 37% who say they see this extremely or very often. Only 3% of Americans say they don’t see inaccurate news about the election at all.

By party

Republicans and independents who lean toward the Republican Party are about twice as likely as Democrats and Democratic leaners to say they come across inaccurate election news extremely or very often (51% vs. 24%). Meanwhile, about a third of Democrats (36%) say they see inaccurate election news not too often or not at all, while just 14% of Republicans say the same. Conservative Republicans are more likely than Republicans who describe themselves as moderate or liberal to report seeing inaccurate news coverage about the election extremely or very often (60% vs. 37%).

Hearing inaccurate election news in conversation

News coverage is not the only place where Americans are seeing or hearing information about the presidential election that they consider inaccurate. About six-in-ten U.S. adults (58%) say they hear people share inaccurate information about the election in conversation at least somewhat often, including 27% who hear this extremely or very often. There are no substantial differences between the two major political parties on this question.

Accuracy of news from primary sources

Just 10% of U.S. adults report seeing inaccurate news coverage from their most-used sources extremely or very often, and 25% say they see this somewhat often. A majority (63%) say they have not seen inaccurate news coverage of the election often or at all from their most commonly used sources.

By party

Republicans are more likely than Democrats to say they see inaccurate election news from the sources they turn to most often. Still, fewer than half of Republicans (42%) say they see this at least somewhat often, including just 14% who say they extremely or very often see inaccurate election coverage from their primary sources.
Determining what is true and what’s not

Americans are split over how easy it is to discern what’s true about the presidential campaign. Around half (52%) say they generally find it difficult to determine whether election news is true or not, slightly more than the share who find it easy to determine (47%). These numbers are similar to the last time we asked this question, in October 2020, when 55% of U.S. adults said it was difficult to distinguish truth from fiction.

By party and ideology

Just as they are more likely to report seeing inaccurate information about the election, Republicans also are more likely to say they find it tough to know what is true. Most Republicans (61%) say it is difficult to determine what is true and what is not, compared with 42% of Democrats who express this view. A majority of Democrats (58%) say they find it easy to distinguish truth from fiction when it comes to election news. Views also vary within each party by ideology: Moderate or liberal Republicans are more likely than conservative Republicans to say it’s difficult to determine whether election-related information is true or not. Among Democrats, liberals are especially likely to find it easy to sort out truth from fiction.


Methodology

The American Trends Panel survey methodology

Overview

Data in this report comes from Wave 155 of the American Trends Panel (ATP), Pew Research Center’s nationally representative panel of randomly selected U.S. adults. The survey was conducted from Sept. 16 to 22, 2024. A total of 9,680 panelists responded out of 10,627 who were sampled, for a survey-level response rate of 91%. The cumulative response rate accounting for nonresponse to the recruitment surveys and attrition is 3%. The break-off rate among panelists who logged on to the survey and completed at least one item is 1%. The margin of sampling error for the full sample of 9,680 respondents is plus or minus 1.3 percentage points.

SSRS conducted the survey for Pew Research Center via online (n=9,391) and live telephone (n=289) interviewing. Interviews were conducted in both English and Spanish. To learn more about the ATP, read “About the American Trends Panel.”

Panel recruitment

Since 2018, the ATP has used address-based sampling (ABS) for recruitment. A study cover letter and a pre-incentive are mailed to a stratified, random sample of households selected from the U.S. Postal Service’s Computerized Delivery Sequence File. This Postal Service file has been estimated to cover 90% to 98% of the population. Within each sampled household, the adult with the next birthday is selected to participate. Other details of the ABS recruitment protocol have changed over time but are available upon request. Prior to 2018, the ATP was recruited using landline and cellphone random-digit-dial surveys administered in English and Spanish.

A national sample of U.S. adults has been recruited to the ATP approximately once per year since 2014. In some years, the recruitment has included additional efforts (known as an “oversample”) to improve the accuracy of data for underrepresented groups. For example, Hispanic adults, Black adults and Asian adults were oversampled in 2019, 2022 and 2023, respectively.
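The reported margin of sampling error is consistent with the standard formula for a proportion once weighting is accounted for. The sketch below is illustrative arithmetic only: the design effect of roughly 1.7 is inferred here from the reported figures, not stated by Pew.

```python
import math

def moe_pp(n, deff=1.0, p=0.5, z=1.96):
    """95% margin of error, in percentage points, for a proportion p from a
    sample of size n, inflated by the design effect (deff) that weighting
    introduces. deff=1.0 corresponds to a simple random sample."""
    return 100 * z * math.sqrt(deff * p * (1 - p) / n)

n = 9_680
print(round(moe_pp(n), 1))            # 1.0 -- under simple random sampling
print(round(moe_pp(n, deff=1.7), 1))  # 1.3 -- with a ~1.7 weighting design effect
```

In other words, a sample of 9,680 would give about a ±1.0-point margin under simple random sampling; the published ±1.3 points reflects the precision lost to unequal weights.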
Sample design

The overall target population for this survey was noninstitutionalized persons ages 18 and older living in the United States. All active panel members were invited to participate in this wave.

Questionnaire development and testing

The questionnaire was developed by Pew Research Center in consultation with SSRS. The web program used for online respondents was rigorously tested on both PC and mobile devices by the SSRS project team and Pew Research Center researchers. The SSRS project team also populated test data that was analyzed in SPSS to ensure the logic and randomizations were working as intended before launching the survey.

Incentives

All respondents were offered a post-paid incentive for their participation. Respondents could choose to receive the post-paid incentive in the form of a check or a gift code to Amazon.com, Target.com or Walmart.com. Incentive amounts ranged from $5 to $15 depending on whether the respondent belongs to a part of the population that is harder or easier to reach. Differential incentive amounts were designed to increase panel survey participation among groups that traditionally have low survey response propensities.

Data collection protocol

The data collection field period for this survey was Sept. 16-22, 2024. Surveys were conducted via self-administered web survey or by live telephone interviewing.

For panelists who take surveys online: Postcard notifications were mailed to a subset on Sept. 16. Survey invitations were sent out in two separate launches: soft launch and full launch. Sixty panelists were included in the soft launch, which began with an initial invitation sent on Sept. 16. All remaining English- and Spanish-speaking sampled online panelists were included in the full launch and were sent an invitation on Sept. 17. Panelists participating online were sent an email invitation and up to two email reminders if they did not respond to the survey.
ATP panelists who consented to SMS messages were sent an SMS invitation with a link to the survey and up to two SMS reminders.

For panelists who take surveys over the phone with a live interviewer: Prenotification postcards were mailed on Sept. 11, and reminder postcards were mailed on Sept. 16. Soft launch took place on Sept. 16 and involved dialing until a total of four interviews had been completed. All remaining English- and Spanish-speaking sampled phone panelists’ numbers were dialed throughout the remaining field period. Panelists who take surveys via phone can receive up to six calls from trained SSRS interviewers.

Data quality checks

To ensure high-quality data, Center researchers performed data quality checks to identify any respondents showing patterns of satisficing. This includes checking whether respondents left questions blank at very high rates or always selected the first or last answer presented. As a result of this checking, eight ATP respondents were removed from the survey dataset prior to weighting and analysis.

Weighting

The ATP data is weighted in a process that accounts for multiple stages of sampling and nonresponse that occur at different points in the panel survey process. First, each panelist begins with a base weight that reflects their probability of recruitment into the panel. These weights are then calibrated to align with the population benchmarks in the accompanying table to correct for nonresponse to recruitment surveys and panel attrition. If only a subsample of panelists was invited to participate in the wave, this weight is adjusted to account for any differential probabilities of selection. Among the panelists who completed the survey, this weight is then calibrated again to align with the population benchmarks identified in the accompanying table and trimmed at the 1st and 99th percentiles to reduce the loss in precision stemming from variance in the weights.
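The trimming step described above (clip the weights at the 1st and 99th percentiles, then rescale so the weighted total is unchanged) can be sketched as follows. This is only an illustration of the trimming idea, not Pew's production weighting code, which also includes the calibration stages.

```python
import numpy as np

def trim_weights(w, lo_pct=1, hi_pct=99):
    """Clip survey weights at the given percentiles, then rescale so the
    weighted total is preserved. Trimming caps the influence of extreme
    weights, trading a little bias for a reduction in variance."""
    lo, hi = np.percentile(w, [lo_pct, hi_pct])
    clipped = np.clip(w, lo, hi)
    return clipped * (w.sum() / clipped.sum())   # preserve the weighted total

# Simulated skewed raw weights (a lognormal shape is common in practice)
rng = np.random.default_rng(1)
w = rng.lognormal(mean=0.0, sigma=0.8, size=10_000)
trimmed = trim_weights(w)

print(trimmed.max() / trimmed.min() <= w.max() / w.min())
# True: the weight range is narrower after trimming
```

The rescaling step matters: clipping alone would change the weighted population total, while the ratio adjustment keeps estimates of totals comparable before and after trimming.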
Sampling errors and tests of statistical significance take into account the effect of weighting. The following table shows the unweighted sample sizes and the error attributable to sampling that would be expected at the 95% level of confidence for different groups in the survey. Sample sizes and sampling errors for other subgroups are available upon request. In addition to sampling error, one should bear in mind that question wording and practical difficulties in conducting surveys can introduce error or bias into the findings of opinion polls. Dispositions and

Methodology Read More »

3. Voters' feelings about the 2024 campaign and election outcomes; concerns about political violence

With less than a month to go until Election Day, voters continue to express mostly negative opinions about the 2024 presidential campaign. Reflecting the closeness of the presidential race, the share of voters who think it is clear which candidate will win – which was already low in July – has edged lower. Harris and Trump supporters differ in their views of the importance of their candidate conceding if they lose, and they have sharply different expectations for how their candidate might handle a defeat. Following two assassination attempts against former President Donald Trump, there are widespread concerns about political violence. A majority of voters say the threat of violence against political leaders and their families is a major problem in the country.

Campaign widely seen as too negative – but few think it’s dull

Voters continue to describe the presidential campaign so far in mostly negative terms: 71% say the campaign is too negative, while only 27% say it is not. 62% say the campaign is not focused on important policy debates, while 37% say it is. Just 19% say the campaign makes them feel proud of the country, while 79% say it does not. 68% of voters say the campaign is interesting, while 30% say it is dull. While voters continue to view the campaign negatively across most dimensions, an increasing share say it is focused on important policy debates: nearly four-in-ten (37%) say so, up from 23% in July. Over the same period, the share of voters who say the campaign makes them feel proud of the country has also risen, from 12% to 19%.

Views among Harris and Trump supporters

For the most part, Harris and Trump supporters express similar views of the 2024 campaign. Majorities of both candidates’ supporters say it is too negative, and comparable shares say it is focused on important policy debates.
Similar shares of Harris and Trump supporters (20% each) say it makes them feel proud of the country. While majorities of both Harris and Trump supporters find the campaign interesting, Harris supporters are more likely to say this (74% vs. 65%). Since July, the increase in the shares of voters who say the campaign is focused on important policy debates and makes them feel proud has come largely among Harris supporters. Currently, 38% of Harris supporters say it is focused on policy. In July, when President Joe Biden was still the Democratic nominee, just 18% of his supporters said this. And while just 20% of Harris supporters say the campaign makes them feel proud of the country, that is nearly double the share of Biden supporters who said this in July (11%). Trump supporters’ views on some of these questions have shown less change. But over this period, there has been a 10 percentage point increase in the share of Trump supporters who say the campaign is too negative (from 61% to 71%).

Is it clear who will win the presidential election?

With a little less than a month before the 2024 election, just 14% of voters say it is already clear who is going to win; an overwhelming 86% say it is not. The share saying the outcome is already clear is down slightly from September, when 20% said so. As was the case in September, Trump supporters (18%) are somewhat more likely than Harris supporters (10%) to say it is already clear who is going to win.

Voters’ emotions if Harris or Trump won

Voters overall have largely similar feelings about a possible Trump or Harris win in November. Roughly three-in-ten say they would feel relieved if Trump (33%) or Harris (31%) won in November, while fewer than two-in-ten say they would feel excited if Trump or Harris won (15% and 17%, respectively).
Voters are slightly more likely to say they would feel angry about a possible Trump victory than a Harris one (25% vs. 21%), while they are slightly more likely to say they would feel disappointed by a Harris victory than a Trump one (30% vs. 26%).

Among Harris supporters

Roughly six-in-ten Harris supporters (62%) say they would feel relieved if Harris won in November, while about a third (35%) say they would feel excited. Harris supporters are more likely to say they would feel excited (35%) about the prospect of a Democratic victory than Clinton (24%) or Biden (23%) supporters were at similar points in the 2016 and 2020 elections. About half of Harris supporters (52%) say they would feel angry if Trump won in November, while 46% say they’d be disappointed. These feelings are nearly identical to the shares of Biden supporters who said the same four years ago (54% angry, 44% disappointed).

Among Trump supporters

A large majority of Trump supporters say they would feel relieved (65%) or excited (31%) if their preferred candidate won. This is similar to the shares who said they would feel relieved (64%) or excited (31%) in 2020. However, Trump supporters are more likely to say they would feel angry about a Harris victory in November than they were at a similar point in 2020 about a Biden victory (42% today, 31% in 2020).

How important is it for Harris, Trump to concede if they lose

When asked how important it is for each candidate to concede the election if they lose, majorities of registered voters say it is very or somewhat important for the losing candidate to publicly acknowledge the opposing candidate as the legitimate president of the country. However, Trump supporters are less likely than Harris supporters to say it is important that the losing candidate concede – particularly if Trump is the losing candidate.
Majorities of both Harris and Trump supporters say it is important for the other candidate to concede if they lose the election: 87% of Harris

3. Voters' feelings about the 2024 campaign and election outcomes; concerns about political violence Read More »

AMD unveils AI-infused chips across Ryzen, Instinct and Epyc brands

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More

Speaking at an event in San Francisco, AMD CEO Lisa Su unveiled AI-infused chips across the company’s Ryzen, Instinct and Epyc brands, fueling a new generation of AI computing for everyone from business users to data centers. Throughout the event, AMD indirectly referenced rivals such as Nvidia and Intel by emphasizing its quest to provide technology that is open and accessible to the widest variety of customers, without an intent to lock those customers into proprietary solutions.

AMD CEO says Turin is the world’s best server processor.

Su said AI will boost our personal productivity, make collaboration much better with features like real-time translation, and make life easier whether you are a creator or an ordinary user. AI will be processed locally to protect your privacy, Su said. She noted the new AMD Ryzen AI Pro PCs will be Copilot+-ready and offer up to 23 hours of battery life (and nine hours using Microsoft Teams). “We’ve been working very closely with AI PC ecosystem developers,” she said, noting more than 100 will be working on AI apps by the end of the year.

Commercial AI mobile Ryzen processors

AMD Ryzen AI Pro 300 Series processor.

AMD announced its third-generation commercial AI mobile processors, designed specifically to transform business productivity with Copilot+ features including live captioning and language translation in conference calls and advanced AI image generators. If you really wanted to, you could use AI-based Microsoft Teams for up to nine hours on new laptops equipped with the AMD processors. The new Ryzen AI PRO 300 Series processors deliver industry-leading AI compute, with up to three times the AI performance of the previous generation of AMD processors. More than 100 products using the Ryzen processors are on the way through 2025.
Enabled with AMD PRO Technologies, the Ryzen AI PRO 300 Series processors offer high security and manageability features designed to streamline IT operations and ensure exceptional ROI for businesses. Ryzen AI PRO 300 Series processors feature the new AMD Zen 5 architecture, delivering outstanding CPU performance, and are the world’s best lineup of commercial processors for Copilot+ enterprise PCs. Zen, now in its fifth generation, has been the foundation behind AMD’s own financial recovery, its gains in market share against Intel, and Intel’s own subsequent hard times and layoffs. “I think the best is that AMD continues to execute on a solid product roadmap. Unfortunately they are making performance comparisons to the competition’s previous generation products,” said Jim McGregor, an analyst at Tirias Research, in an email to VentureBeat. “So, we have to wait and see how the products will compare. However, I do expect them to be highly competitive, especially the processors. Note that AMD only announced a new architecture for networking; everything else is evolutionary, but that’s not a bad thing when you are in a strong position and gaining market share.” Laptops equipped with Ryzen AI PRO 300 Series processors are designed to tackle businesses’ toughest workloads, with the top-of-stack Ryzen AI 9 HX PRO 375 offering up to 40% higher performance and up to 14% faster productivity performance compared with Intel’s Core Ultra 7 165U, AMD said. With the addition of the XDNA 2 architecture powering the integrated NPU (the neural processing unit, or AI-focused part of the processor), AMD Ryzen AI PRO 300 Series processors offer a cutting-edge 50+ NPU TOPS (trillions of operations per second) of AI processing power, exceeding Microsoft’s Copilot+ AI PC requirements and delivering exceptional AI compute and productivity capabilities for the modern business.
Built on a 4 nanometer (nm) process and with innovative power management, the new processors deliver extended battery life, ideal for sustained performance and productivity on the go. “Enterprises are increasingly demanding more compute power and efficiency to drive their everyday tasks and most taxing workloads. We are excited to add the Ryzen AI PRO 300 Series, the most powerful AI processor built for business PCs, to our portfolio of mobile processors,” said Jack Huynh, senior vice president and general manager of the computing and graphics group at AMD, in a statement. “Our third generation AI-enabled processors for business PCs deliver unprecedented AI processing capabilities with incredible battery life and seamless compatibility for the applications users depend on.”

AMD expands commercial OEM ecosystem

OEM partners continue to expand their commercial offerings with new PCs powered by Ryzen AI PRO 300 Series processors, delivering well-rounded performance and compatibility to their business customers. With industry-leading TOPS, the next generation of Ryzen processor-powered commercial PCs is set to expand the possibilities of local AI processing with Microsoft Copilot+. OEM systems powered by Ryzen AI PRO 300 Series are expected to be on shelves starting later this year. “Microsoft’s partnership with AMD and the integration of Ryzen AI PRO processors into Copilot+ PCs demonstrate our joint focus on delivering impactful AI-driven experiences for our customers. The Ryzen AI PRO’s performance, combined with the latest features in Windows 11, enhances productivity, efficiency, and security,” said Pavan Davuluri, corporate vice president for Windows + Devices at Microsoft, in a statement. “Features like Improved Windows Search, Recall, and Click to Do make PCs more intuitive and responsive. Security enhancements, including the Microsoft Pluton security processor and Windows Hello Enhanced Sign-in Security, help safeguard customer data with advanced protection.
We’re proud of our strong history of collaboration with AMD and are thrilled to bring these innovations to market.” “In today’s AI-powered era of computing, HP is dedicated to delivering powerful innovation and performance that revolutionizes the way people work,” said Alex Cho, president of Personal Systems at HP, in a statement. “With the HP EliteBook X Next-Gen AI PC, we are empowering modern leaders to push boundaries without compromising power or performance. We are proud to expand our AI PC lineup powered by AMD, providing our commercial customers with a truly personalized experience.” “Lenovo’s partnership with AMD continues to

AMD unveils AI-infused chips across Ryzen, Instinct and Epyc brands Read More »

Anthropic challenges OpenAI with affordable batch processing

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More

Anthropic, a leading artificial intelligence company, launched its new Message Batches API on Tuesday, allowing businesses to process large volumes of data at half the cost of standard API calls. This new offering handles up to 10,000 queries asynchronously within a 24-hour window, marking a significant step toward making advanced AI models more accessible and cost-effective for enterprises dealing with big data.

Introducing the Message Batches API—a cost-effective way to process vast amounts of queries asynchronously. You can submit batches of up to 10,000 queries at a time. Each batch is processed within 24 hours and costs 50% less than standard API calls. https://t.co/nkXG9NCPIs — Anthropic (@AnthropicAI) October 8, 2024

The AI economy of scale: Batch processing brings down costs

The Batch API offers a 50% discount on both input and output tokens compared to real-time processing, positioning Anthropic to compete more aggressively with other AI providers like OpenAI, which introduced a similar batch processing feature earlier this year. This move represents a significant shift in the AI industry’s pricing strategy. By offering bulk processing at a discount, Anthropic is effectively creating an economy of scale for AI computations. This could lead to a surge in AI adoption among mid-sized businesses that were previously priced out of large-scale AI applications. The implications of this pricing model extend beyond mere cost savings. It could fundamentally alter how businesses approach data analysis, potentially leading to more comprehensive and frequent large-scale analyses that were previously considered too expensive or resource-intensive.

Pricing Comparison: GPT-4o vs. Claude’s Premium Models; costs shown per million tokens (Table Credit: VentureBeat)

Model             | Input cost (per 1M tokens) | Output cost (per 1M tokens) | Context window
GPT-4o            | $1.25                      | $5.00                       | 128K
Claude 3.5 Sonnet | $1.50                      | $7.50                       | 200K

From real-time to right-time: Rethinking AI processing needs

Anthropic has made the Batch API available for its Claude 3.5 Sonnet, Claude 3 Opus, and Claude 3 Haiku models through the company’s API. Support for Claude on Google Cloud’s Vertex AI is expected soon, while customers using Claude through Amazon Bedrock can already access batch inference capabilities. The introduction of batch processing capabilities signals a maturing understanding of enterprise AI needs. While real-time processing has been the focus of much AI development, many business applications don’t require instantaneous results. By offering a slower but more cost-effective option, Anthropic is acknowledging that for many use cases, “right-time” processing is more important than real-time processing. This shift could lead to a more nuanced approach to AI implementation in businesses. Rather than defaulting to the fastest (and often most expensive) option, companies may start to strategically balance their AI workloads between real-time and batch processing, optimizing for both cost and speed.

The double-edged sword of batch processing

Despite the clear benefits, the move toward batch processing raises important questions about the future direction of AI development. While it makes existing models more accessible, there’s a risk that it could divert resources and attention from advancing real-time AI capabilities. The trade-off between cost and speed is not new in technology, but in the field of AI, it takes on added significance. As businesses become accustomed to the lower costs of batch processing, there may be less market pressure to improve the efficiency and reduce the cost of real-time AI processing. Moreover, the asynchronous nature of batch processing could potentially limit innovation in applications that rely on immediate AI responses, such as real-time decision making or interactive AI assistants.
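The 50% batch discount can be made concrete with a quick cost calculation. The sketch below assumes Claude 3.5 Sonnet standard rates of $3.00 (input) and $7.50 × 2 = $15.00 (output) per million tokens, which halve to the $1.50/$7.50 figures in the pricing table; the query and token counts are hypothetical:

```python
# Cost of a hypothetical job run via standard API calls vs. the
# Message Batches API, assuming Claude 3.5 Sonnet standard rates of
# $3.00 (input) and $15.00 (output) per million tokens.
STANDARD_INPUT, STANDARD_OUTPUT = 3.00, 15.00  # $ per 1M tokens
BATCH_DISCOUNT = 0.5                           # batch costs 50% less

def job_cost(input_tokens, output_tokens, batch=False):
    """Total cost in dollars; the batch flag applies the 50% discount."""
    scale = BATCH_DISCOUNT if batch else 1.0
    return scale * (input_tokens / 1e6 * STANDARD_INPUT
                    + output_tokens / 1e6 * STANDARD_OUTPUT)

# 10,000 queries (the per-batch cap), each ~1,000 tokens in, ~500 out.
standard = job_cost(10_000 * 1_000, 10_000 * 500)              # $105.00
batched = job_cost(10_000 * 1_000, 10_000 * 500, batch=True)   # $52.50
```

At this scale the discount saves $52.50 per full batch, which is the economy-of-scale argument the article makes for large recurring analysis jobs.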
Striking the right balance between advancing both batch and real-time processing capabilities will be crucial for the healthy development of the AI ecosystem. As the AI industry continues to evolve, Anthropic’s new Batch API represents both an opportunity and a challenge. It opens up new possibilities for businesses to leverage AI at scale, potentially increasing access to advanced AI capabilities. At the same time, it underscores the need for a thoughtful approach to AI development that considers not just immediate cost savings, but long-term innovation and diverse use cases. The success of this new offering will likely depend on how well businesses can integrate batch processing into their existing workflows and how effectively they can balance the trade-offs between cost, speed, and computational power in their AI strategies.

Anthropic challenges OpenAI with affordable batch processing Read More »