
A Microsoft employee quit. Then the company completely broke the rules

I sometimes wonder how often managers in tech look at their direct reports and bet on who will quit first. And next.

To be a tech employee is to be coveted and cosseted. To quit, however, is to be shunned. You are, after all, causing a problem for your bosses — and offering a reflection of their management skills. That, at least, was always my impression.

Yet I was recently assaulted by a curiously uplifting tale from, of all curiously uplifting companies, Microsoft. Ben Armstrong, group program manager of Microsoft's Azure Kubernetes Service, was so proud of his company that he had to deposit the story in everyone's most pride-filled arena: Twitter.

He told of an employee who quit to go to a rival. Tech companies would much rather you created a freakish startup — in which they can invest — than went to some dreaded enemy. Armstrong says he told the employee that, as they were going to a rival company, their departure would likely be fast-tracked.

On the employee's last day, however, there was a family emergency. The employee needed to get on a plane immediately and fly to another country. Here was the problem: the employee was on an H-1B visa. So if they flew without effective employment, America wouldn't let them back in.

The employee asked if there was anything Microsoft could do to help. I can think of one or two companies that would say: "Sorry, see ya, wouldn't wanna be ya." Armstrong himself wasn't optimistic: "I told them we could try, but I was not hopeful."

Oh, Ben. You didn't think Microsoft is a compassionate company? Bill Gates doesn't work there anymore.

Armstrong seems to have been surprised at the company's alertness to another human's plight. He said: "Two hours later we were in a call with an HR Director at MSFT; who immediately agreed that while this was against MSFT policy, that was not important here. What was important was that this person needed to be with their family."

And so Microsoft agreed, despite the paperwork of departure having already been signed, to keep the departing employee for another week.

I confess I found this oddly heartening. To put aside any potential resentment and to consider the simple human situation was surprisingly commendable.

Naturally, there were various Twittered positions. Many praised Microsoft's readiness to break its own rules for the sake of a departing employee. Some suggested it was a fine way to make that employee feel they could return to Microsoft one day. One offered that this behavior may not have always been associated with Microsoft in the past.

Scott Rich, a senior security engineer for SentinelOne Partnerships, mused: "When I announced my intent to leave MS, the CISO and Security Director stopped talking to me overnight. 2 weeks later, I ran into one of them where I was told in a spiteful voice 'good luck.'" He added: "2 years later we had the most successful cybersecurity IPO in history."

I was especially moved, though, by a comment from a rival. Massimo Re Ferrè, who styles himself as chief psychology officer, container team at AWS Cloud, offered this wise perspective: "My comment is not directed to MS but it's sad that we live in times where it would have been 'normal' to do nothing and 'amazing' to do what it is mere common sense and minimum level of humanity."

I fear some might want to remind him that AWS is part of Amazon. More importantly, he's right. The very fact that this act was, in some way, extraordinary does offer a dim view of where the corporate world has sunk.

Too often, tech companies utter rote platitudes about their human focus, yet think nothing of instantly firing employees. Collectively, on a Zoom call. Too often, they have policies that say: "Hey, you quit, so too bad." Which, I suppose, Microsoft still does.


Who U.S. Adults Follow on TikTok

Adult TikTok users in the U.S. use the platform to follow pop culture and entertainment accounts much more than news and politics.

This study seeks to better understand the accounts that U.S. adults choose to follow on TikTok. The TikTok user experience happens largely within the site's For You page, a feed that is "unique and tailored to each specific individual" based on many factors, such as the behavior they display while on the site. The Following page, by contrast, is constructed directly from the contents of accounts the user follows. However, user interactions with posts from the accounts they follow play a nontrivial role in shaping their For You page. And studying these followed accounts can give us a better understanding of the content that users actively choose to seek out on the platform.

To conduct this analysis, we started with a representative sample of U.S. adult TikTok users who gave us a valid account handle (their unique account name preceded by an "@" sign) for research purposes. All of these users are members of the Center's American Trends Panel (ATP). For the 664 such users whose profiles publicly display the accounts they follow, we collected a list of all their followed accounts. That produced an initial list of 227,946 unique accounts. We then collected any available profile information for those accounts, such as their display name, bio and number of followers. This collection occurred April 8-16, 2024. We also collected up to five of their most recent posts (if available). This content data collection was conducted June 14-20, 2024.

Using a combination of human coding and machine classification with large language models (LLMs), we then categorized those accounts based on whom each account belongs to and the type of content it posts. For more details on the categories we included in the analysis and how this data collection and classification was conducted, refer to the methodology of this report.

There are a variety of ways to categorize the different types of prominent accounts present on a social media platform. Here are some of the terms and definitions we have adopted for this study:

- Followed accounts – Any TikTok account followed by a given user. The accounts a user follows on TikTok appear in that user's "Following" list.
- Following page – A content feed on TikTok that consists solely of the content posted by the accounts that a given user follows.
- For You page (FYP) – A content feed on TikTok that is algorithmically curated for each user, based on their interests and behaviors on the platform. The For You page is the default feed that is served to users as they visit the platform. The FYP may include posts from accounts that a given user follows, but it typically contains recommended content from other accounts beyond the user's following list.
- Influencers and content creators – Used interchangeably to refer to accounts with at least 5,000 followers on TikTok that attained their popularity primarily due to their presence on the internet, especially on social media (often described as "internet-native"), as opposed to those with a significant level of public awareness outside of social media (such as movie stars, professional athletes or politicians).
- Mega influencers and internet celebrities – Refers to influencer or creator accounts with at least 1,000,000 followers on TikTok.
- Mid-tier individual influencers and creators – Refers to influencer or creator accounts that appear to belong to an individual and have between 5,000 and 1,000,000 followers on TikTok.
- Small accounts – Refers to accounts with fewer than 5,000 followers on TikTok. These accounts are typically maintained by individual users as personal profiles; they are more often set to private and often have not posted much content.
- Entertainers, celebrities and other pop culture personalities – Refers to accounts that appear to belong to traditional celebrities and notable figures from pop culture or the entertainment industry, including movie stars, bands or musicians, professional athletes, comedians and more, who attained their fame primarily outside of social media.
- Journalists, pundits and media outlets – Refers to accounts that belong to professionals in the news media, either the official accounts of specific outlets or shows, or individual journalists, commentators or pundits. For this study, "journalists" are individuals with a current affiliation to a news organization that is listed in their account bio.
- Politicians, public officials and government agencies – Refers to accounts that belong to either government agencies or individual politicians or officials. For this study, "politicians" are individuals who have ever held elected office or are currently running for elected office.
- Accounts that post about … – Topic labels in this study, such as "news," "politics" or "pop culture and entertainment," refer to accounts that were observed mentioning a given topic in their most recent posts as of the content collection period for this study (June 14-20, 2024). Accounts do not need to primarily post about a given topic to receive any of the content topic categorizations used in this study. Any mention is sufficient.

For further discussion of the account type and topic categories used in the study, refer to the report methodology.

A new Pew Research Center analysis of the accounts Americans follow on TikTok highlights the centrality of internet-native content creators, prominent influencers and traditional celebrities on the popular short-form video platform. It also finds that users choose to follow far more accounts that post about pop culture and entertainment than accounts that post about news or politics.

To conduct this analysis, we surveyed a nationally representative group of U.S. adults who gave us access to their TikTok handles and identified all the accounts those users follow. We then categorized all of those followed accounts based on who they are and what sorts of things they tend to post about. These are some of the main findings:

What types of accounts do U.S. adults follow on TikTok?

Broadly, they follow lots of creators and influencers who
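The follower-count cutoffs in the definitions above are concrete enough to express as a simple rule. Here is a minimal sketch in Python; the function and its arguments are illustrative only, not Pew's actual classification code, which also relied on human coders and LLM classifiers:

```python
def account_tier(followers: int, internet_native: bool) -> str:
    """Approximate the report's follower-count tiers.

    Thresholds come from the report's definitions: fewer than 5,000
    followers is a "small account"; 5,000 to 1,000,000 is a mid-tier
    influencer; 1,000,000 or more is a mega influencer.
    """
    if followers < 5_000:
        return "small account"
    if not internet_native:
        # Movie stars, athletes, politicians and other figures famous
        # outside social media are categorized by who they are,
        # not by follower count.
        return "entertainer, celebrity or other public figure"
    if followers < 1_000_000:
        return "mid-tier individual influencer or creator"
    return "mega influencer or internet celebrity"


# Example: an internet-native creator with 250,000 followers.
print(account_tier(250_000, internet_native=True))
# -> mid-tier individual influencer or creator
```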


Black Forest Labs releases Flux 1.1 Pro and an API

Black Forest Labs (BFL), a startup founded by the creators of the popular Stable Diffusion AI image generation model that underpins many AI image generation apps and services, has announced the release of a new, faster text-to-image model called Flux 1.1 Pro, and with it, a paid application programming interface (API) on which developers can build third-party apps powered by the model (or incorporate it into their existing apps).

This means that a company offering creative tools can add Flux as an option to its offerings, if it (and by extension, its end users) is willing to pay the API costs. Individual users can access the new Flux 1.1 Pro model not through Black Forest Labs's site but through partners together.ai, Replicate, fal.ai and Freepik. Some of these services refer to the model under a different name, such as "Flux Fast."

No details were immediately provided about Flux 1.1 Pro's training dataset, an issue of contention for generative AI companies: Stability AI and rival Midjourney are being sued by artists who accuse the firms and others of violating their copyright by scraping and training en masse, without consent or compensation, on human-created images posted to the web. One key class action lawsuit against Stability AI and Midjourney remains in court.

The news comes following the success of Flux's initial open source text-to-image AI model, which powers Elon Musk's Grok 2 chatbot from xAI and is available to subscribers of his social network X. Unlike the earlier Flux.1 model, which was open source and free for anyone to download, fine-tune, customize and otherwise use for commercial or personal purposes as they saw fit, the new Flux 1.1 Pro appears to be, like Flux 1.0 Pro, a paid, proprietary offering only. However, it is still available for commercial and enterprise usage.

BFL sees the launch of its API and Flux 1.1 Pro as major steps in its growth as a company, offering both developers and enterprises access to powerful and customizable tools for image generation.

Codenamed "Blueberry," Flux 1.1 Pro takes the new top spot on the Artificial Analysis image arena leaderboard

Flux 1.1 Pro improves on the earlier Flux 1.0 Pro model by delivering six times faster generation speeds while also enhancing image quality, prompt adherence and diversity. It enables workflows that prioritize speed without sacrificing quality, generating output three times faster than its predecessor. Additionally, BFL announced an update for the original Flux 1.0 Pro, doubling its generation speed to improve efficiency across the board.

The performance of Flux 1.1 Pro has been validated through its secret debut on Artificial Analysis, an independent third-party benchmark platform for comparing AI model performance, where the model was tested in the days prior to today's announcement under the code name "blueberry." (Some erroneously speculated on X that this was OpenAI testing Sora, following its tests of the o1 LLM as "strawberry.") As of October 1, 2024, Flux 1.1 Pro holds the highest Elo score on the platform at 1153, surpassing other generative models in terms of visual fidelity and prompt accuracy, including Midjourney 6.1 (Elo score of 1100) and Ideogram v2 (score of 1108).
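Arena leaderboards of this kind derive Elo scores from many pairwise human votes: each time raters prefer one model's image over another's, the winner's rating rises and the loser's falls. As a rough illustration, here is the standard Elo update for a single comparison (a sketch only; Artificial Analysis has not published its exact scoring code):

```python
def elo_update(rating_a: float, rating_b: float,
               score_a: float, k: float = 32.0) -> tuple[float, float]:
    """Standard Elo update for one pairwise comparison.

    score_a is 1.0 if model A's image was preferred, 0.0 if model B's
    was preferred, and 0.5 for a tie. k controls how fast ratings move.
    """
    # Expected probability that A wins, given the current ratings.
    expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))
    rating_a += k * (score_a - expected_a)
    rating_b += k * ((1.0 - score_a) - (1.0 - expected_a))
    return rating_a, rating_b


# Example: a 1153-rated model wins one vote against an 1108-rated one.
print(elo_update(1153.0, 1108.0, score_a=1.0))
```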
The Elo-based third-party benchmark was established earlier in the summer of 2024 by Artificial Analysis co-founder and CEO Micah Hill-Smith and co-founder and product lead George Cameron, and uses human ratings of pairs of images to derive its scores.

For users demanding high-resolution outputs, Flux 1.1 Pro will soon support ultra-high-resolution images (up to 2K), maintaining its precision and speed through upcoming API updates.

BFL API offers developers AI image generation starting at 4 cents per image

Complementing the Flux 1.1 Pro release is the BFL API in beta, which brings BFL's generative capabilities directly to businesses and developers looking to integrate state-of-the-art image generation into their own applications. The API offers advanced customization, enabling users to adjust model choice, resolution and content moderation to meet their specific needs. It also promises scalability, making it suitable for projects ranging from small-scale to enterprise-level.

BFL's API comes with competitive pricing, making it attractive for users seeking high-quality outputs without excessive costs. For example, Flux 1.1 Pro image generation is priced at $0.04 per image, while the older Flux 1.0 Pro is available at $0.05 per image. Developers can begin integrating the API today, and BFL promises ongoing improvements as the beta progresses. The company envisions its API opening the door to countless creative applications, especially in industries like design, advertising and entertainment, where demand for high-quality AI-generated media continues to grow.

Building on strong initial success

Black Forest Labs is no stranger to the spotlight. Just two months earlier, the company secured $31 million in seed funding, led by Andreessen Horowitz (a16z), with backing from high-profile investors such as Brendan Iribe, Michael Ovitz and Garry Tan. As reported by VentureBeat, the launch of BFL and its earlier Flux 1.0 model was widely seen as a milestone in the AI community. BFL co-founders Robin Rombach, Patrick Esser and Andreas Blattmann brought their expertise from Stability AI, the team behind Stable Diffusion, into this new venture, with a vision for more accessible, open-source generative AI tools.

Flux 1.0, which came in three variants (Flux 1.0 Pro, Flux 1.0 Dev and Flux 1.0 Schnell), gained early praise for its 12-billion-parameter architecture and its ability to match or even surpass the output quality of competing models like Midjourney and DALL-E. The open-source nature of these models, especially Flux 1.0 Dev and Flux 1.0 Schnell, positioned BFL as a critical player in the debate over open-source versus proprietary AI.

Industry context and competition

Black Forest Labs' move to launch Flux 1.1 Pro comes at a time of heightened competition in the generative AI media space, with many creators looking to harness text-to-image AI models alongside image-to-video models such as those from Pika, Runway and Luma. Midjourney and


Google’s Gemini enterprise coding assistant shows enterprise-focused coding is growing

Google Cloud's newest offering, Gemini Code Assist Enterprise, aims to compete with GitHub's enterprise-focused coding platform by understanding local codebases and providing stronger security.

Gemini Code Assist Enterprise, formerly Duet AI, lets developers code faster because it understands their organization's codebase, has a large context window and allows for customization. Developers can access the assistant for $45 per month per user, or $19 per month with a yearly subscription.

"Developers can stay in flow state longer, bringing more insights directly to their IDEs, while also completing complex tasks like upgrading a Java version in an entire repo," said Ryan J. Salva, Google Cloud's senior director of developer tools and operations, in a blog post. "This means developers get to focus on creative problem-solving, leading to greater job satisfaction while you get a faster time-to-market, gaining a competitive edge."

The platform offers code suggestions based on local codebases. Google said the large context window helps developers "generate or transform code that's more relevant to your application." The coding assistant can connect directly to other Google Cloud services like Firebase, Databases, BigQuery, Colab Enterprise, Apigee and Application Integration. Salva said this is to meet developers where they are, since "the more services it touches, the faster your builders can create and deliver applications."

The code customization is based on internal libraries, so Code Assist can make custom code suggestions. It will index GitHub and GitLab libraries and support self-hosted libraries early next year. "A code assistant dramatically reduces the time to ramp on new technologies and incorporates the nuances of an organization's coding standards into the suggestions it provides," Salva wrote.

However, Google's biggest selling point for its coding assistant is enterprise-grade security. It extends Google's promise that it won't use customer data to train its Gemini models. It also promises that users have complete control over which repositories the code assistant will index, and that they can purge data at any time. Google will also offer indemnification — legal cover for any potential lawsuit — for any code generated by Gemini Code Assist Enterprise.

Enterprise-focused coding assistants

Coding assistance, of course, is nothing new for generative AI. However, as more enterprises hope to integrate coding assistants into their technology stacks, providers are tailoring their offerings to them.

GitHub released an enterprise-focused Copilot, GitHub Copilot Enterprise, in February, offering largely similar features. Oracle's coding assistant focuses on Java and SQL enterprise applications. Other companies, like Harness, have also released coding assistants that give real-time suggestions and target businesses. Harness's assistant is built on Gemini.

Google's entry into the fray underscores the increasing competition in coding assistants and the need to build enterprise-specific solutions even for a task most chatbots can readily do. Moving coding assistants out of separate chatbots and into developer environments (or, in Google's case, other channels) gives flexibility to companies looking to improve productivity. The more quickly developers can test code and fix bugs in local codebases, the faster companies can move and deploy applications.


Acknowledgments

This analysis was produced by Pew Research Center as part of the Pew-Templeton Global Religious Futures project, which analyzes religious change and its impact on societies around the world. Funding for the Global Religious Futures project comes from The Pew Charitable Trusts and the John Templeton Foundation (grant 63095). This publication does not necessarily reflect the views of the John Templeton Foundation. Pew Research Center is a subsidiary of The Pew Charitable Trusts, its primary funder.

This report is a collaborative effort based on the input and analysis of the following individuals. Find related reports online at pewresearch.org/religion.

Primary Researchers
Jonathan Evans, Senior Researcher
Kelsey Jo Starr, Research Analyst

Research Team
Becka A. Alper, Senior Researcher
Laura Clancy, Research Analyst
Alan Cooperman, Director, Religion Research
Manolo Corichi, Research Analyst
Moira Fagan, Research Associate
Janell Fetterolf, Senior Researcher
Sneha Gubbala, Research Assistant
Christine Huang, Research Associate
Asta Kallo, Research Assistant
Kirsten Lesage, Research Associate
Jordan Lippert, Research Analyst
William Miner, Research Analyst
Besheer Mohamed, Senior Researcher
Justin Nortey, Research Analyst
Jacob Poushter, Associate Director, Global Attitudes Research
Andrew Prozorovsky, Research Assistant
Sofia Hernandez Ramones, Research Assistant
Michael Rotolo, Research Associate
Laura Silver, Associate Director, Global Attitudes Research
Maria Smerkovich, Research Associate
Gregory A. Smith, Senior Associate Director, Religion Research
Patricia Tevington, Research Associate
Richard Wike, Director, Global Attitudes Research

Methods Team
Dorene Asare-Marfo, Panel Manager
Anna Brown, Research Methodologist
Scott Keeter, Senior Survey Advisor
Courtney Kennedy, Vice President, Methods and Innovation
Arnold Lau, Research Methodologist
Carolyn Lau, International Research Methodologist
Andrew Mercer, Principal Methodologist
Patrick Moynihan, Associate Director, International Research Methods
Georgina Pizzolitto, Research Methodologist
Dana Popky, Associate Panel Manager
Sofi Sinozich, International Research Methodologist

Editorial and Graphic Design
Jeff Diamant, Senior Writer/Editor
Rebecca Leppert, Copy Editor
Bill Webster, Senior Information Graphics Designer

Communications and Web Publishing
Achsah Callahan, Communications Manager
Justine Coleman, Associate Digital Producer
Andrew Grant, Communications Associate
Anna Schiller, Associate Director, Communications

In addition, Pew Research Center is grateful to many others who provided valuable advice and assistance on this project, including Rebecca Kielty and Brianna Vetter. Former Center staffer Sarah Austin also contributed to this report. We appreciate the following individuals for advising us on strategic outreach: Eugenia Mitchelstein, associate professor of communication at Universidad de San Andrés (Argentina), and Sebastián Lacunza, columnist at elDiarioAR.com (Argentina).


AI21 CEO says transformers not right for AI agents due to error perpetuation

As more enterprise organizations look to the so-called agentic future, one barrier may be how AI models are built. For enterprise AI developer AI21, the answer is clear: the industry needs to look to other model architectures to enable more efficient AI agents.

AI21 CEO Ari Goshen said in an interview with VentureBeat that transformers, the most popular model architecture, have limitations that would make a multi-agent ecosystem difficult. "One trend I'm seeing is the rise of architectures that aren't Transformers, and these alternative architectures will be more efficient," Goshen said. "Transformers function by creating so many tokens that can get very expensive."

AI21, which focuses on developing enterprise AI solutions, has made the case before that transformers should be an option for model architecture but not the default. It is developing foundation models using its Jamba architecture, short for Joint Attention and Mamba architecture. Jamba builds on the Mamba architecture developed by researchers from Princeton University and Carnegie Mellon University, which can offer faster inference times and longer context.

Goshen said alternative architectures like Mamba and Jamba can often make agentic structures more efficient and, most importantly, affordable. In his view, Mamba-based models have better memory performance, which would make agents, particularly agents that connect to other models, work better.

He attributes the reason AI agents are only now gaining popularity — and why most agents have not yet gone into production — to the reliance on LLMs built with transformers. "The main reason agents are not in production mode yet is reliability, or the lack of reliability," Goshen said. "When you break down a transformer model, you know it's very stochastic, so any errors will perpetuate."

Enterprise agents are growing in popularity

AI agents emerged as one of the biggest trends in enterprise AI this year. Several companies launched AI agents and platforms to make it easy to build agents. ServiceNow announced updates to its Now Assist AI platform, including a library of AI agents for customers. Salesforce has its stable of agents called Agentforce, while Slack has begun allowing users to integrate agents from Salesforce, Cohere, Workday, Asana, Adobe and more.

Goshen believes this trend will become even more popular with the right mix of models and model architectures. "Some use cases that we see now, like question and answers from a chatbot, are basically glorified search," he said. "I think real intelligence is in connecting and retrieving different information from sources." Goshen added that AI21 is in the process of developing offerings around AI agents.

Other architectures vying for attention

Goshen strongly supports alternative architectures like Mamba and AI21's Jamba, mainly because he believes transformer models are too expensive and unwieldy to run. In place of the attention mechanism that forms the backbone of transformer models, Mamba-based models use a state-space approach that can prioritize different data, assign weights to inputs, optimize memory usage and make better use of a GPU's processing power.

Mamba is growing in popularity. Other open-source and open-weight AI developers have begun releasing Mamba-based models in the past few months. Mistral released Codestral Mamba 7B in July, and in August, the Falcon team came out with its own Mamba-based model, Falcon Mamba 7B.
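For context on why Mamba-style models can be cheaper to run: they descend from state-space models, which process a sequence through a fixed-size recurrent state instead of attending over every previous token, so per-step cost stays constant as the context grows. A toy sketch of that linear recurrence follows (illustrative only; real Mamba adds input-dependent "selective" parameters and hardware-aware kernels):

```python
import numpy as np

def ssm_scan(A, B, C, xs):
    """Run a linear state-space recurrence over a sequence.

    h_t = A @ h_{t-1} + B @ x_t   (fixed-size hidden state)
    y_t = C @ h_t                 (per-step output)

    Total cost is linear in sequence length, and the state h never
    grows, unlike attention, whose per-token cost and cache grow
    with the length of the context.
    """
    h = np.zeros(A.shape[0])
    ys = []
    for x in xs:
        h = A @ h + B @ x
        ys.append(C @ h)
    return np.stack(ys)


# Example: state size 4, input/output size 2, sequence of 8 steps.
rng = np.random.default_rng(0)
A = 0.9 * np.eye(4)              # stable, decaying state transition
B = rng.normal(size=(4, 2))
C = rng.normal(size=(2, 4))
print(ssm_scan(A, B, C, rng.normal(size=(8, 2))).shape)  # (8, 2)
```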
However, the transformer architecture remains the default, if not the standard, choice for developing foundation models. OpenAI's GPT is, of course, a transformer model — it's literally in the name — but so are most other popular models.

Goshen said that, ultimately, enterprises want whichever approach is more reliable. But organizations must also be wary of flashy demos promising to solve many of their problems. "We're at the phase where charismatic demos are easy to do, but we're closer to that than to the product phase," Goshen said. "It's okay to use enterprise AI for research, but it's not yet at the point where enterprises can use it to inform decisions."


Vera AI launches ‘AI Gateway’ to help companies safely scale AI without the risks

Vera AI Inc., a startup focused on responsible artificial intelligence deployment, announced today the general availability of its AI Gateway platform. The system aims to help organizations more quickly and safely implement AI technologies by providing customizable guardrails and model routing capabilities.

"We're really excited to be announcing the general availability of our model routing and guardrails platform," said Liz O'Sullivan, CEO and co-founder of Vera, in an interview with VentureBeat. "We've been hard at work over the last year building something that could scalably and repeatably accelerate time to production for the kinds of business use cases that actually stand to generate a lot of excitement."

[Image: Vera AI's policy configuration interface, showcasing the platform's granular content moderation tools. The dashboard allows companies to customize AI safeguards, balancing the need for innovation with responsible content management — a key selling point in Vera's mission to make AI deployment both efficient and ethical. (Credit: Vera)]

Bridging the gap: How Vera's AI gateway tackles last-mile challenges

The launch comes at a time when many companies are eager to adopt generative AI and other advanced AI technologies but remain hesitant due to potential risks and challenges in implementing safeguards. Vera's platform sits between users and AI models, enforcing policies and optimizing costs across different types of AI requests.

"Businesses are only ever interested in doing one of three things, whether that's make more money, save more money, or reducing risk," O'Sullivan explained. "We've focused ourselves squarely on the last mile problems, which people think, just like regular software engineering, that it's going to be quick and easy, that these are just afterthoughts that you can apply to optimize costs or to reduce risks associated with things like disinformation and broad and CSAM, but they're actually quite hard."

Justin Norman, CTO and co-founder of Vera, emphasized the importance of nuance in AI policy implementation: "You want to be able to set the bar for where your system will respond and where it will not respond and what it will do, without having to rely upon what some other companies made a decision for you on."

[Image: Vera AI's interface demonstrates its content moderation capabilities, blocking a user's input that failed to follow the specified rules — a key feature in the company's mission to provide guardrails for responsible AI deployment. (Credit: Vera)]

From AI safety activism to startup success: The minds behind Vera

The company's approach appears to be gaining traction. According to O'Sullivan, Vera is already "processing tens of thousands of model requests per month across a handful of paying customers." The startup offers API-based pricing at one cent per call, aligning its incentives with customer success in AI deployment. Additionally, Vera has introduced a 30-day free trial, which can be accessed using the code "FRIENDS30," allowing potential customers to experience the platform's capabilities firsthand.

Vera's launch is particularly noteworthy given the founders' backgrounds. O'Sullivan, who serves on the National AI Advisory Committee, has a history of AI safety activism, including her work at Clarifai.
Norman brings experience from government, academia and industry, including PhD work at UC Berkeley focused on AI robustness and evaluation.

Navigating the AI safety landscape: Vera's role in responsible innovation

As AI adoption accelerates across industries, platforms like Vera's could play a crucial role in addressing safety and ethical concerns while enabling innovation. The startup's focus on customizable guardrails and efficient model routing positions it well to serve both enterprise clients managing internal AI use and companies developing consumer-facing AI applications.

However, Vera faces a competitive landscape, with other AI safety and deployment startups also vying for market share. The company's success will likely depend on its ability to demonstrate clear value to customers and stay ahead of rapidly evolving AI technologies and associated risks.

For organizations looking to responsibly implement AI, Vera's launch offers a new option to consider. As O'Sullivan put it, "We're here to make it as easy as possible to enjoy the benefits of AI while reducing the risks that things do go wrong."
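As a rough illustration of the gateway pattern Vera describes (a layer that sits between callers and models, applies policy checks, then routes each request), here is a minimal sketch. Every name and rule in it is hypothetical; Vera has not published its implementation:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Policy:
    name: str
    violates: Callable[[str], bool]  # True if the text breaks the rule

def make_gateway(policies: list[Policy],
                 models: dict[str, Callable[[str], str]]):
    """Build a callable that enforces policies, then routes to a model.

    Mirrors the pattern in the article: disallowed prompts are blocked
    before any model is called, and routing can pick a cheaper model
    when the request does not need a larger one.
    """
    def handle(prompt: str) -> str:
        for policy in policies:
            if policy.violates(prompt):
                return f"Blocked by policy: {policy.name}"
        # Toy routing rule: long prompts go to the larger model.
        route = "large" if len(prompt) > 200 else "small"
        return models[route](prompt)
    return handle

# Hypothetical usage with stub models and one keyword-based policy.
gateway = make_gateway(
    policies=[Policy("no-credentials", lambda p: "password" in p.lower())],
    models={"small": lambda p: f"[small model] {p[:40]}",
            "large": lambda p: f"[large model] {p[:40]}"},
)
print(gateway("Summarize this memo for me"))
print(gateway("My password is hunter2"))
```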


Many Catholics in the U.S. and Latin America Want the Church to Allow Birth Control and to Let Women Become Priests

Most view Pope Francis favorably, though his ratings have dropped.

[Image: Pope Francis waves to the crowd as he arrives in the popemobile to celebrate an open-air Mass in Villavicencio, Colombia, on Sept. 8, 2017. (Alberto Pizzoli/AFP via Getty Images)]

This Pew Research Center analysis explores views on the Catholic Church and Pope Francis among Catholics in Latin America and the United States. All seven countries in the survey have Catholic populations that rank among the world's 25 largest – notably including Brazil (largest), Mexico (second-largest) and the U.S. (fourth-largest) – according to the Vatican's 2021 Statistical Yearbook of the Church. And the six Latin American countries surveyed account for roughly three-quarters of the region's Catholics.

For non-U.S. data, this analysis draws on nationally representative surveys of 6,234 adults – including 3,655 Catholics – conducted from Jan. 22 to April 27, 2024. Surveys were conducted face-to-face in Argentina, Brazil, Chile, Colombia, Mexico and Peru.

In the U.S., we surveyed 12,693 respondents from Feb. 13 to 25, 2024, including 2,021 Catholics. Most of the survey's respondents (10,642) – including all of the survey's Catholic respondents – are members of the American Trends Panel (ATP), an online survey panel recruited through national random sampling of residential addresses, which gives nearly all U.S. adults a chance of selection. Read more about the ATP's methodology. The remaining respondents (2,051) are members of three other panels: the Ipsos KnowledgePanel, the NORC Amerispeak Panel and the SSRS Opinion Panel. All three are national survey panels recruited through random sampling (not "opt-in" polls). We used these additional panels to ensure that the survey would have enough Jewish and Muslim respondents to be able to report on their views. (While Jewish and Muslim respondents are not discussed in this particular report that focuses on Catholic topics, they are discussed in other reports based on this survey.)

The U.S. data is weighted to be representative of the U.S. adult population by gender, race, ethnicity, partisan affiliation, education, religious affiliation and other categories. Here are the questions used for the report, along with responses, and the survey methodology.

This analysis was produced by Pew Research Center as part of the Pew-Templeton Global Religious Futures project, which analyzes religious change and its impact on societies around the world. Funding for the Global Religious Futures project comes from The Pew Charitable Trusts and the John Templeton Foundation (grant 63095). This publication does not necessarily reflect the views of the John Templeton Foundation.

A new survey by Pew Research Center asked Catholics in six Latin American countries and the United States how they think the church should handle a variety of matters related to contraception, the priesthood and sexuality. Among the main findings:

- Most Catholics in all seven countries want the church to allow Catholics to use birth control. The shares saying this range from 86% in Argentina to 63% in Brazil.
- In most of the countries surveyed, majorities of Catholics also say the church should allow women to become priests.
- Opinion is more divided on whether the church should allow priests to get married. Roughly two-thirds of Catholics in Argentina, Chile and the U.S. are in favor, but majorities in Mexico and Peru say the church should not allow priests to marry.
- Views on whether the church should recognize the marriages of gay and lesbian couples vary among Catholics in the countries surveyed. Majorities of Catholics in Argentina and Chile say the church should recognize the marriages of gay and lesbian couples, and just over half of U.S. Catholics agree. In the other four countries, fewer than half take this stance.

The survey also finds that Pope Francis, the first Latin American pope, remains broadly popular among Catholics across the region – though his favorability ratings are lower now than they were a decade ago, shortly after his papacy began in March 2013. The decrease in favorability has been sharpest among Catholics in Argentina, his country of birth. Ten years ago, nearly all Catholics surveyed there (98%) expressed a favorable opinion of Francis, compared with 74% today. And in the U.S., where a February 2014 survey found that 85% of Catholics viewed the pope favorably, 75% now take that view.

Most Catholics surveyed also say Francis represents a change in the Catholic Church's direction, with more of them calling it a major change than a minor one.

These are among the key findings of a survey of 5,676 Catholics, conducted in English, Spanish and Portuguese from January through April 2024 in seven countries: Argentina, Brazil, Chile, Colombia, Mexico, Peru and the U.S. The rest of this report explores these findings in more detail.

More than a dozen of our surveys have measured U.S. Catholics' favorability toward Pope Francis since the start of his papacy. Find this more detailed U.S. trend in our recent report, "Majority of U.S. Catholics Express Favorable View of Pope Francis."

How we worded these questions

We used simple, common phrases in the survey questions about some steps that Catholics would – or would not – like to see the church take. Our goal was to make the questions easy to understand for as many respondents as possible. In some cases, the wording of the questions involved a trade-off between broad understandability and theological nuance.

For example, one question asks whether the church should "allow priests to get married." This would not, strictly speaking, be a change in doctrine. The Catholic Church already allows married priests under certain circumstances, such as if a man was married before being ordained in an Eastern Catholic Church. Technically, the church considers the rule of celibacy for priests to be a "discipline" rather than a doctrine. Nonetheless, allowing parish priests to get married and continue in their duties would represent a big change in the everyday life of the church in the United States and Latin America.

Similarly, another question asks whether the church should allow unmarried Catholics who "are living with a romantic partner" to receive Communion. Actually, Catholicism has no rule against unmarried people living together. The church's teaching


4. Accuracy of election news

Most U.S. adults (73%) say they see inaccurate election news at least somewhat often, including 37% who say they see this extremely or very often. Only 3% of Americans say they don't see inaccurate news about the election at all.

By party

Republicans and independents who lean toward the Republican Party are about twice as likely as Democrats and Democratic leaners to say they come across inaccurate election news extremely or very often (51% vs. 24%). Meanwhile, about a third of Democrats (36%) say they see inaccurate election news not too often or not at all, while just 14% of Republicans say the same. Conservative Republicans are more likely than Republicans who describe themselves as moderate or liberal to report seeing inaccurate news coverage about the election extremely or very often (60% vs. 37%).

Hearing inaccurate election news in conversation

News coverage is not the only place where Americans are seeing or hearing information about the presidential election that they consider inaccurate. About six-in-ten U.S. adults (58%) say they hear people share inaccurate information about the election in conversation at least somewhat often, including 27% who hear this extremely or very often. There are not substantial differences between the two major political parties on this question.

Accuracy of news from primary sources

Just 10% of U.S. adults report seeing inaccurate news coverage from their most-used sources extremely or very often, and 25% say they see this somewhat often. A majority (63%) say they have not seen inaccurate news coverage of the election often or at all from their most commonly used sources.

By party

Republicans are more likely than Democrats to say they see inaccurate election news from the sources they turn to most often. Still, fewer than half of Republicans (42%) say they see this at least somewhat often, including just 14% who say they extremely or very often see inaccurate election coverage from their primary sources.

Determining what is true and what's not

Americans are split over how easy it is to discern what's true about the presidential campaign. Around half (52%) say they generally find it difficult to determine whether election news is true or not, slightly more than the share who find it easy to determine (47%). These numbers are similar to the last time we asked this question, in October 2020, when 55% of U.S. adults said it was difficult to distinguish truth from fiction.

By party and ideology

Just as they are more likely to report seeing inaccurate information about the election, Republicans also are more likely to say they find it tough to know what is true. Most Republicans (61%) say it is difficult to determine what is true and what is not, compared with 42% of Democrats who express this view. A majority of Democrats (58%) say they find it easy to distinguish truth from fiction when it comes to election news.

Views also vary within each party by ideology: Moderate or liberal Republicans are more likely than conservative Republicans to say it's difficult to determine whether election-related information is true or not. Among Democrats, liberals are especially likely to find it easy to sort out truth from fiction.


Methodology

The American Trends Panel survey methodology

Overview

Data in this report comes from Wave 155 of the American Trends Panel (ATP), Pew Research Center's nationally representative panel of randomly selected U.S. adults. The survey was conducted from Sept. 16 to 22, 2024. A total of 9,680 panelists responded out of 10,627 who were sampled, for a survey-level response rate of 91%. The cumulative response rate accounting for nonresponse to the recruitment surveys and attrition is 3%. The break-off rate among panelists who logged on to the survey and completed at least one item is 1%. The margin of sampling error for the full sample of 9,680 respondents is plus or minus 1.3 percentage points.

SSRS conducted the survey for Pew Research Center via online (n=9,391) and live telephone (n=289) interviewing. Interviews were conducted in both English and Spanish. To learn more about the ATP, read "About the American Trends Panel."

Panel recruitment

Since 2018, the ATP has used address-based sampling (ABS) for recruitment. A study cover letter and a pre-incentive are mailed to a stratified, random sample of households selected from the U.S. Postal Service's Computerized Delivery Sequence File. This Postal Service file has been estimated to cover 90% to 98% of the population. Within each sampled household, the adult with the next birthday is selected to participate. Other details of the ABS recruitment protocol have changed over time but are available upon request. Prior to 2018, the ATP was recruited using landline and cellphone random-digit-dial surveys administered in English and Spanish.

A national sample of U.S. adults has been recruited to the ATP approximately once per year since 2014. In some years, the recruitment has included additional efforts (known as an "oversample") to improve the accuracy of data for underrepresented groups. For example, Hispanic adults, Black adults and Asian adults were oversampled in 2019, 2022 and 2023, respectively.

Sample design

The overall target population for this survey was noninstitutionalized persons ages 18 and older living in the United States. All active panel members were invited to participate in this wave.

Questionnaire development and testing

The questionnaire was developed by Pew Research Center in consultation with SSRS. The web program used for online respondents was rigorously tested on both PC and mobile devices by the SSRS project team and Pew Research Center researchers. The SSRS project team also populated test data that was analyzed in SPSS to ensure the logic and randomizations were working as intended before launching the survey.

Incentives

All respondents were offered a post-paid incentive for their participation. Respondents could choose to receive the post-paid incentive in the form of a check or a gift code to Amazon.com, Target.com or Walmart.com. Incentive amounts ranged from $5 to $15 depending on whether the respondent belongs to a part of the population that is harder or easier to reach. Differential incentive amounts were designed to increase panel survey participation among groups that traditionally have low survey response propensities.

Data collection protocol

The data collection field period for this survey was Sept. 16-22, 2024. Surveys were conducted via self-administered web survey or by live telephone interviewing.

For panelists who take surveys online: Postcard notifications were mailed to a subset on Sept. 16. Survey invitations were sent out in two separate launches: soft launch and full launch. Sixty panelists were included in the soft launch, which began with an initial invitation sent on Sept. 16. All remaining English- and Spanish-speaking sampled online panelists were included in the full launch and were sent an invitation on Sept. 17. Panelists participating online were sent an email invitation and up to two email reminders if they did not respond to the survey. ATP panelists who consented to SMS messages were sent an SMS invitation with a link to the survey and up to two SMS reminders.

For panelists who take surveys over the phone with a live interviewer: Prenotification postcards were mailed on Sept. 11, and reminder postcards were mailed on Sept. 16. Soft launch took place on Sept. 16 and involved dialing until a total of four interviews had been completed. All remaining English- and Spanish-speaking sampled phone panelists' numbers were dialed throughout the remaining field period. Panelists who take surveys via phone can receive up to six calls from trained SSRS interviewers.

Data quality checks

To ensure high-quality data, Center researchers performed data quality checks to identify any respondents showing patterns of satisficing. This includes checking for whether respondents left questions blank at very high rates or always selected the first or last answer presented. As a result of this checking, eight ATP respondents were removed from the survey dataset prior to weighting and analysis.

Weighting

The ATP data is weighted in a process that accounts for the multiple stages of sampling and nonresponse that occur at different points in the panel survey process. First, each panelist begins with a base weight that reflects their probability of recruitment into the panel. These weights are then calibrated to align with the population benchmarks in the accompanying table to correct for nonresponse to recruitment surveys and panel attrition. If only a subsample of panelists was invited to participate in the wave, this weight is adjusted to account for any differential probabilities of selection. Among the panelists who completed the survey, this weight is then calibrated again to align with the population benchmarks identified in the accompanying table and trimmed at the 1st and 99th percentiles to reduce the loss in precision stemming from variance in the weights. Sampling errors and tests of statistical significance take into account the effect of weighting.

The following table shows the unweighted sample sizes and the error attributable to sampling that would be expected at the 95% level of confidence for different groups in the survey. Sample sizes and sampling errors for other subgroups are available upon request. In addition to sampling error, one should bear in mind that question wording and practical difficulties in conducting surveys can introduce error or bias into the findings of opinion polls.

Dispositions and
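A note on the weighting step described above: trimming calibrated weights at the 1st and 99th percentiles is a standard variance-reduction technique and is easy to sketch. A minimal illustration with numpy follows; it is not Pew's production code, and the calibration (raking to population benchmarks) that precedes trimming is omitted:

```python
import numpy as np

def trim_weights(weights, lo_pct=1.0, hi_pct=99.0):
    """Cap survey weights at the given percentiles, then rescale.

    Trimming extreme weights trades a small amount of bias for a
    reduction in variance, which is why the report trims at the
    1st and 99th percentiles after calibration.
    """
    lo, hi = np.percentile(weights, [lo_pct, hi_pct])
    trimmed = np.clip(weights, lo, hi)
    # Rescale so the weights still sum to the original total.
    return trimmed * (weights.sum() / trimmed.sum())

# Example: a right-skewed set of weights gets its tails pulled in.
rng = np.random.default_rng(1)
w = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)
print(round(w.max(), 2), round(trim_weights(w).max(), 2))
```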
