Why ‘prosocial AI’ must be the framework for designing, deploying and governing AI

As AI pervades every sphere of modern life, the central challenge facing business leaders, policymakers and innovators is no longer whether to adopt intelligent systems but how. In a world marked by escalating polarization, resource depletion, eroding trust in institutions and volatile information landscapes, the critical imperative is to engineer AI so that it contributes meaningfully and sustainably to human and planetary well-being. Prosocial AI — a framework of design, deployment and governance principles that ensure AI is thoughtfully tailored, trained, tested and targeted to uplift people and the planet — is more than a moral stance or PR veneer. It is a strategic approach to positioning AI within a broader ecology of intelligence that values collective flourishing over narrow optimization.

The ABCD of AI’s potential: From gloom to glory

The rationale for prosocial AI emerges from four intertwined realms — agency, bonding, climate and division (ABCD). Each domain highlights the dual character of AI: It can either intensify existing dysfunctions or act as a catalyst for regenerative, inclusive solutions.

Agency: Too often, AI-driven platforms rely on addictive loops and opaque recommender systems that erode user autonomy. Prosocial AI, by contrast, can activate agency by revealing the provenance of its suggestions, offering meaningful user controls and respecting the multifaceted nature of human decision-making. It is not merely about “consent” or “transparency” as abstract buzzwords; it is about designing AI interactions that acknowledge human complexity — the interplay of cognition, emotion, bodily experience and social context — and enabling individuals to navigate their digital environments without succumbing to manipulation or distraction.
Bonding: Digital technologies can either fracture societies into echo chambers or serve as bridges that connect diverse people and ideas. Prosocial AI applies nuanced linguistic and cultural models to identify shared interests, highlight constructive contributions and foster empathy across boundaries. Instead of fueling outrage for attention, it helps participants discover complementary perspectives, strengthening communal bonds and reinforcing the delicate social fabrics that hold societies together.

Climate: AI’s relationship with the environment is fraught with tension. AI can optimize supply chains, enhance climate modeling and support environmental stewardship. However, the computational intensity of training large models often entails a considerable carbon footprint. A prosocial lens demands designs that balance these gains against ecological costs — adopting energy-efficient architectures, transparent lifecycle assessments and ecologically sensitive data practices. Rather than treat the planet as an afterthought, prosocial AI anchors climate considerations as a cardinal priority: AI must not only advise on sustainability but must be sustainable.

Division: The misinformation cascades and ideological rifts that define our era are not an inevitable byproduct of technology, but a result of design choices that privilege virality over veracity. Prosocial AI counters this by embedding cultural and historical literacy into its processes, respecting contextual differences and providing fact-checking mechanisms that enhance trust. Rather than homogenizing knowledge or imposing top-down narratives, it nurtures informed pluralism, making digital spaces more navigable, credible and inclusive.
Double literacy: Integrating AI and NI

Realizing this vision depends on cultivating what we might call “double literacy.” On one side is AI literacy: mastering the technical intricacies of algorithms, understanding how biases emerge from data and establishing rigorous accountability and oversight mechanisms. On the other side is natural intelligence (NI) literacy: a comprehensive, embodied understanding of human cognition and emotion (brain and body), personal identity (self) and cultural embeddedness (society).

This NI literacy is not a soft skill set perched on the margins of innovation; it is fundamental. Human intelligence is shaped by neurobiology, physiology, interoception, cultural narratives and community ethics — an intricate tapestry that transcends reductive notions of “rational actors.” By bringing NI literacy into dialogue with AI literacy, developers, decision-makers and regulators can ensure that digital architectures honor our multidimensional human reality. This holistic approach fosters systems that are ethically sound, context-sensitive and capable of complementing rather than constraining human capacities.

AI and NI in synergy: Prosocial AI goes beyond zero-sum thinking

The popular imagination often pits machines against humans in a zero-sum contest. Prosocial AI challenges this dichotomy. Consider the beauty of complementarity in healthcare: AI excels at pattern recognition, sifting through vast troves of medical images to detect anomalies that might elude human specialists. Physicians, in turn, draw on their embodied cognition and moral instincts to interpret results, communicate complex information and consider each patient’s broader life context. The outcome is not simply more efficient diagnostics; it is more humane, patient-centered care. Similar paradigms can transform decision-making in law, finance, governance and education.
By integrating the precision of AI with the nuanced judgment of human experts, we might transition from hierarchical command-and-control models to collaborative intelligence ecosystems. Here, machines handle complexity at scale and humans provide the moral vision and cultural fluency necessary to ensure that these systems serve authentic public interests.

Building a prosocial infrastructure

To embed prosocial AI at the core of our future, we need a concerted effort across all sectors:

Industry and tech companies: Innovators can prioritize “human-in-the-loop” designs and explicitly reward metrics tied to well-being rather than engagement at any cost. Instead of designing AI to hook users, they can build systems that inform, empower and uplift — measured by improvements in health outcomes, educational attainment, environmental sustainability or social cohesion. Example: The Partnership on AI provides frameworks for prosocial innovation, helping guide developers toward responsible practices.

Civil society and NGOs: Community groups and advocacy organizations can guide the development and deployment of AI, testing new tools in real-world contexts. They can bring ethnically, linguistically and culturally diverse perspectives to the design table, ensuring that the resulting AI systems serve a broad range of human experiences and needs.

Educational institutions: Schools and universities should integrate double literacy into their curricula while reinforcing critical thinking, ethics and cultural studies. By nurturing AI and NI literacy, educational bodies can help ensure that future generations are skilled in machine learning (ML) and deeply grounded in human values. Example:


Forrester’s 2025 Customer Obsession Awards: Share Your Story Of Creating Your Customers’ Best Journey

Welcome to the 2025 Customer Obsession Awards! Every year, Forrester recognizes leading organizations and executives who put customers at the center of everything — their leadership, strategy, and operations — and, in the process, accelerate growth, customer loyalty, and employee engagement. This year, we are recognizing those who understand that customer engagement knows no bounds. They use visionary strategies that seamlessly blend marketing, customer experience (CX), and digital business models into a meticulously crafted journey for their customers that delivers results.

We know that customer obsession isn’t easy. That’s why we want to celebrate companies’ and leaders’ hard work and dedication around the world. Regional winners will be announced at CX Summit EMEA (June 2–4, 2025), CX Summit North America (June 23–26, 2025), and CX Summit APAC (August 18, 2025).

Nominate Your Company And Leaders For Forrester’s 2025 Customer Obsession Awards

We’re seeking nominations in two categories:

Customer-Obsessed Enterprise. This award celebrates an organization with sharp and sustained customer focus in their product, service, and brand decisions, strategies, and operations. The winning organization encourages deep collaboration across their entire enterprise to ensure that customer obsession aligns with the brand promise and results in quantifiably better outcomes for customers, employees, and the business. Nominations are open to all organizations in North America, Asia Pacific, and EMEA with at least 1,000 employees, focusing on the consumer-facing part of the business.

Customer-Obsessed Leadership. This award will recognize a senior executive who puts the customer at the center of every decision and models behaviors that balance both customer and business needs.
The winning executive creates an environment where everyone is empowered and inspired to create quantifiably better outcomes for customers, employees, and the business with amazing products, experiences, and service. Nominations for the Customer-Obsessed Leadership Award are open to senior leaders in organizations across industries that are headquartered in North America and have 1,000 or more employees, focusing their submission on the consumer-facing part of the business. Senior executives in CX, marketing, and digital business roles are invited to apply.

Submit Nominations Here

Get more information on eligibility, fill out the nomination form, and learn more about the awards process here. The Forrester team looks forward to your nominations and meeting the winners at our events this year in Nashville, London, and Sydney. Award winners and finalists will be notified ahead of each event. Learn more about Forrester’s Customer Obsession Awards program and previous award winners here. Register to attend Forrester’s 2025 North America, EMEA, and APAC CX Summits.


Coding Boot Camp Seeks Coverage For Tuition Financing Row

By Hope Patti (January 24, 2025, 5:01 PM EST) — A San Francisco-based company that runs coding boot camps said its insurers must defend and indemnify it for federal and state probes and private settlements related to its tuition financing program, telling a California federal court that coverage denials have left the company on the brink of insolvency….

Law360 is on it, so you are, too. A Law360 subscription puts you at the center of fast-moving legal issues, trends and developments so you can act with speed and confidence. Over 200 articles are published daily across more than 60 topics, industries, practice areas and jurisdictions. A Law360 subscription includes daily newsletters, expert analysis, a mobile app, advanced search, judge information, real-time alerts, 450K+ searchable archived articles and more. Experience Law360 today with a free 7-day trial.


What Nearshoring Growth In Americas Means For Patents

By Ernest Huang (January 23, 2025, 7:07 PM EST) — Trade policies and treaties by the United States in recent years have led to more nearshoring to the Americas, a trend with significant patent implications….


Swedish startup to build pilot plant for wood-based material that purifies the air

Swedish startup Adsorbi has secured €1mn to ramp up production of a cellulose-based material that sucks up pollutants from the air. Metsä Spring, the venture arm of the Finnish forestry giant, led the funding round alongside Chalmers Ventures and Jovitech Invest.

“We are planning to launch the pilot plant in June and we will be equipped to meet our customer demands while maintaining consistent quality,” Hanna Johansson, CEO of Adsorbi, told TNW via email. The facility will have an expected capacity of 100 tonnes per year.

Johansson co-founded Adsorbi in 2022 alongside Christian Löfvendahl, Romain Bordes, and Kinga Grenda. The team spun out the company from materials research at the Chalmers University of Technology in Gothenburg.

Bordes and Grenda, the chief researchers, originally wanted to develop new ways to protect works of art from harmful pollution. But in the process, they discovered a way to turn cellulose from Sweden’s abundant forests into an air purification material with wide-ranging applications. Adsorbi’s material can be used wherever gaseous air pollutants are a problem, from air filters to products that remove odours. Continuing the team’s initial objective, the startup also works with museums to protect artefacts and artworks.

The substance — which looks like little, white pieces of sponge — promises a better, greener alternative to activated carbon, the current market standard. Adsorbi claims its product lasts longer, doesn’t release any hazardous organic compounds back into the air, and is water and fire-resistant. Plus, the material has half the carbon footprint of activated carbon, the startup said. Handily, the substance also changes colour to indicate when it needs to be replaced.

Adsorbi’s material can be used in air filters, products that remove odours, and in museums to protect works of art.
Credit: Adsorbi

“Air pollutant control is needed in many markets, and we’re ready to offer a commercial solution that ensures the air we breathe is clean without extensive use of fossil-based materials,” said Johansson.

Air pollution is something we usually associate with the outdoors. However, indoor air can be two to five times more polluted than outdoor air, according to the American Lung Association.

Adsorbi’s patented material is designed to capture nitrogen oxides like nitric oxide (NO) and nitrogen dioxide (NO2) — major contributors to air pollution — as well as acids and aldehydes. The latter are commonly found in cosmetics, perfumes, cleaning products, odourant dispensers, and grooming aids.

Last September, Adsorbi launched eco-friendly shoe deodoriser inserts in partnership with footwear giant Icebug and odour reducer Smellwell. The company said it is also working with multinational air filtration companies on several other products, including air fresheners and sustainable art conservation products.


FTC Signals Unified Focus On Kids' Privacy With Rule Update

By Allison Grande (January 24, 2025, 10:26 PM EST) — The Federal Trade Commission’s recent unanimous move to strengthen longstanding online privacy protections for children demonstrated that the agency won’t be easing up on enforcement in this space as a new Republican regime takes over, despite lingering questions over whether further changes or expansions may be on the horizon. …


OpenAI: Extending model ‘thinking time’ helps combat emerging cyber vulnerabilities

Typically, developers focus on reducing inference time — the period between when AI receives a prompt and provides an answer — to deliver faster insights. But when it comes to adversarial robustness, OpenAI researchers say: Not so fast. They propose that increasing the amount of time a model has to “think” — inference-time compute — can help build up defenses against adversarial attacks.

The company used its own o1-preview and o1-mini models to test this theory, launching a variety of static and adaptive attack methods — image-based manipulations, intentionally providing incorrect answers to math problems, and overwhelming models with information (“many-shot jailbreaking”). They then measured the probability of attack success based on the amount of computation the model used at inference.

“We see that in many cases, this probability decays — often to near zero — as the inference-time compute grows,” the researchers write in a blog post. “Our claim is not that these particular models are unbreakable — we know they are — but that scaling inference-time compute yields improved robustness for a variety of settings and attacks.”

From simple Q/A to complex math

Large language models (LLMs) are becoming ever more sophisticated and autonomous — in some cases essentially taking over computers for humans to browse the web, execute code, make appointments and perform other tasks autonomously — and as they do, their attack surface becomes wider and ever more exposed. Yet adversarial robustness continues to be a stubborn problem, with progress in solving it still limited, the OpenAI researchers point out — even as it is increasingly critical as models take on more actions with real-world impacts.
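The researchers’ core empirical claim, that attack success probability decays toward near zero as inference-time compute grows, can be illustrated with a toy Monte Carlo sketch. The exponential decay form and every rate below are our own assumptions for illustration only, not figures from the paper:

```python
import random

def attack_success_prob(base_rate: float, decay: float, compute: int) -> float:
    """Toy model: each extra unit of inference compute gives the defender
    another independent chance to catch the attack, shrinking success odds."""
    return base_rate * (1.0 - decay) ** compute

def simulate(compute: int, base_rate: float = 0.8, decay: float = 0.3,
             trials: int = 20_000, seed: int = 0) -> float:
    """Monte Carlo estimate of attack success rate at a given compute level."""
    rng = random.Random(seed)
    p = attack_success_prob(base_rate, decay, compute)
    return sum(rng.random() < p for _ in range(trials)) / trials

# Estimated success rate shrinks as the model "thinks" longer.
for c in (0, 2, 4, 8):
    print(f"compute={c}: success rate ~ {simulate(c):.3f}")
```

This mirrors only the shape of the reported trend; the paper measures real attacks against real models rather than fitting an analytic curve.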
“Ensuring that agentic models function reliably when browsing the web, sending emails or uploading code to repositories can be seen as analogous to ensuring that self-driving cars drive without accidents,” they write in a new research paper. “As in the case of self-driving cars, an agent forwarding a wrong email or creating security vulnerabilities may well have far-reaching real-world consequences.”

To test the robustness of o1-mini and o1-preview, researchers tried a number of strategies. First, they examined the models’ ability to solve both simple math problems (basic addition and multiplication) and more complex ones from the MATH dataset (which features 12,500 questions from mathematics competitions). They then set “goals” for the adversary: getting the model to output 42 instead of the correct answer; to output the correct answer plus one; or to output the correct answer times seven. Using a neural network to grade, researchers found that increased “thinking” time allowed the models to calculate correct answers.

They also adapted the SimpleQA factuality benchmark, a dataset of questions intended to be difficult for models to resolve without browsing. Researchers injected adversarial prompts into web pages that the AI browsed and found that, with higher compute times, the models could detect inconsistencies and improve factual accuracy.

Ambiguous nuances

In another method, researchers used adversarial images to confuse models; again, more “thinking” time improved recognition and reduced error. Finally, they tried a series of “misuse prompts” from the StrongREJECT benchmark, designed so that victim models must answer with specific, harmful information. This helped test the models’ adherence to content policy. However, while increased inference time did improve resistance, some prompts were able to circumvent defenses. Here, the researchers call out the differences between “ambiguous” and “unambiguous” tasks.
Math, for instance, is undoubtedly unambiguous — for every problem x, there is a corresponding ground truth. However, for more ambiguous tasks like misuse prompts, “even human evaluators often struggle to agree on whether the output is harmful and/or violates the content policies that the model is supposed to follow,” they point out. For example, if an abusive prompt seeks advice on how to plagiarize without detection, it is unclear whether an output merely providing general information about methods of plagiarism is detailed enough to support harmful actions.

“In the case of ambiguous tasks, there are settings where the attacker successfully finds ‘loopholes,’ and its success rate does not decay with the amount of inference-time compute,” the researchers concede.

Defending against jailbreaking, red-teaming

In performing these tests, the OpenAI researchers explored a variety of attack methods. One is many-shot jailbreaking, or exploiting a model’s disposition to follow few-shot examples. Adversaries “stuff” the context with a large number of examples, each demonstrating an instance of a successful attack. Models with higher compute times were able to detect and mitigate these more frequently and successfully.

Soft tokens, meanwhile, allow adversaries to directly manipulate embedding vectors. While increasing inference time helped here, the researchers point out that there is a need for better mechanisms to defend against sophisticated vector-based attacks.

The researchers also performed human red-teaming attacks, with 40 expert testers looking for prompts to elicit policy violations. The red-teamers executed attacks across five levels of inference-time compute, specifically targeting erotic and extremist content, illicit behavior and self-harm. To help ensure unbiased results, they used blind and randomized testing and also rotated trainers.
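The “context stuffing” shape of the many-shot jailbreak described above can be sketched in a few lines. This is purely illustrative of how such a prompt is assembled; the function name and placeholder strings are ours, and no real jailbreak content is included:

```python
def build_many_shot_prompt(demonstrations: list[str], target_request: str) -> str:
    """Assemble a many-shot prompt: many fabricated user/assistant turns,
    each appearing to show the model complying, followed by the attacker's
    real request. The defense tested by OpenAI is giving the model more
    "thinking" time to notice the pattern rather than follow it."""
    shots = "\n\n".join(
        f"User: {demo}\nAssistant: [compliant reply]" for demo in demonstrations
    )
    return f"{shots}\n\nUser: {target_request}\nAssistant:"

prompt = build_many_shot_prompt(
    [f"placeholder request {i}" for i in range(256)],
    "the attacker's actual request",
)
# 256 demonstration turns plus the final, real turn.
print(prompt.count("User:"))  # → 257
```

The attack scales with context length: the more fabricated compliance examples fit in the window, the stronger the few-shot pull on the model.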
In a more novel method, the researchers performed a language-model program (LMP) adaptive attack, which emulates the behavior of human red-teamers who rely heavily on iterative trial and error. In a looping process, attackers received feedback on previous failures, then used this information for subsequent attempts and prompt rephrasing. This continued until they achieved a successful attack or exhausted 25 iterations without one.

“Our setup allows the attacker to adapt its strategy over the course of multiple attempts, based on descriptions of the defender’s behavior in response to each attack,” the researchers write.

Exploiting inference time

In the course of their research, OpenAI found that attackers are also actively exploiting inference time. One of these methods they dubbed “think less” — adversaries essentially tell models to reduce compute, thus increasing their susceptibility to error. Similarly, they identified a failure mode in reasoning models that they


3 Noteworthy Effects Of The 2025 NDAA

By Adam Bartolanzo and Kathryn Carlson (January 21, 2025, 5:08 PM EST) — The Servicemember Quality of Life Improvement and National Defense Authorization Act for Fiscal Year 2025, the annual defense policy and budget bill signed into law last month, contains a discretionary topline budget of $895.2 billion, to be split between the U.S. Department of Defense, U.S. Department of Energy and other federal agencies for national defense related spending.[1]…


FTC Report On AI Sector Illuminates Future Enforcement

By Martin Mackowski, Michael Wise and Francesco Liberatore (January 24, 2025, 5:59 PM EST) — On Jan. 17, the Federal Trade Commission revealed the much-anticipated findings of its ongoing inquiry into the artificial intelligence sector, publishing its staff report on AI partnerships and investments. While the report was pushed out at the end of the Biden administration, it and the accompanying statements from members of the incoming FTC majority shed light on the direction of future antitrust enforcement in the AI segment….
