What you need to know about Manus, the new AI agentic system from China hailed as a second ‘DeepSeek moment’

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage.

Stop me if you’ve heard this one before: a little-known Chinese startup is making waves globally for an impressive new AI product. No, we’re not talking about DeepSeek-R1, the AI reasoning model that shook Western AI circles earlier this year. Instead, the hot new product du jour is Manus, a multipurpose AI agent — that is, more than an AI model, it’s an interface for controlling multiple models that can autonomously complete complicated tasks like generating reports or running dozens of social media accounts on the user’s behalf.

If it sounds similar to the Deep Research modes offered by OpenAI, Google and others, as well as OpenAI’s Operator agent and Anthropic’s Computer Use mode (the latter two of which can, like Manus, take control of a user’s computer or programs on it, moving cursors and typing to perform actions within software), then congrats — you’ve understood what it aims to offer.

But what do action-oriented leaders and decision makers within enterprises in the West and abroad — such as CTOs, product managers, IT team leaders and more — need to know about Manus and the capabilities it offers? Read on to find out.

What is Manus and who’s behind it?

Manus AI was officially announced on March 5 on social network X, with a post from its builder Butterfly Effect describing it as “the first general AI agent” that autonomously executes complex tasks rather than just generating ideas. According to the South China Morning Post (SCMP), Butterfly Effect has offices in Beijing and Wuhan. The company reportedly has only a few dozen employees but has rapidly gained attention in China’s AI landscape. The founding team includes entrepreneurs and experienced product managers, led by Xiao Hong, a 33-year-old serial entrepreneur and 2015 graduate of Wuhan’s Huazhong University of Science and Technology.

Manus team.
Credit: Optics Valley of China/Facebook

Xiao previously built WeChat-based applications that were acquired by larger companies, and later launched Monica.ai, an AI assistant available as a browser extension and mobile app. On its website, Manus explains that its name comes from the Latin word for “hand,” a nod to the fact that users can rely on it to perform actions for them — or, in my words, to “lend them a hand.”

How does Manus AI work?

Manus AI is designed as a multi-agent system, meaning it combines several AI models to handle tasks independently. Unlike AI chatbots that assist users by providing information, Manus can research, analyze data, generate reports, automate workflows and even write and deploy code. According to X posts by Ji Yichao, co-founder and chief scientist of Manus AI, the system is built on Anthropic’s Claude 3.5 Sonnet — a nine-month-old AI model — and fine-tuned versions of Alibaba’s Qwen models. The team is currently testing an upgrade to Anthropic’s newest and most performant model, Claude 3.7, which is expected to further enhance Manus’s reasoning and execution capabilities.

Manus AI operates asynchronously, meaning users can assign tasks and walk away while it completes them autonomously. It is currently in private beta, with access granted through invitation codes.

How does Manus AI stack up against U.S.-based competition?

One of the biggest reasons Manus AI has gained traction is its strong benchmark performance: it beat U.S. firm OpenAI’s own o3-powered Deep Research agent and the “previous state-of-the-art,” according to a graph posted on the official Manus website. This claim, along with real-world tests, has led some AI power users and early adopters to conclude that Manus may be one of the most capable autonomous AI agents available today.
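The assign-and-walk-away workflow described above maps naturally onto background task execution. Purely as a toy illustration — nothing here reflects Manus’s actual internals, and the task string and function names are invented for this sketch — an asynchronous agent task might be dispatched like this:

```python
import asyncio

async def run_agent_task(task: str) -> str:
    """Stand-in for an agent autonomously working a task in the background."""
    await asyncio.sleep(0.1)  # placeholder for research, browsing and drafting steps
    return f"Report for: {task}"

async def main() -> None:
    # Assign the task, then "walk away" -- it runs without further user input.
    job = asyncio.create_task(run_agent_task("compare SF rental neighborhoods"))
    # ... the user is free to do other things here ...
    print(await job)  # collect the finished output whenever it's ready

asyncio.run(main())
```

The point is simply that the user's session is not blocked while the agent works, which is what distinguishes this style of agent from a turn-by-turn chatbot.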
Beyond benchmarks, Manus has already proven itself on freelance platforms like Upwork and Fiverr and in Kaggle machine learning (ML) challenges, successfully executing complex real-world tasks.

AI influencers celebrate Manus’s arrival and impressive performance

Conversation about Manus in media and AI circles took off late last week when users on X noted that some people were using it to automate the management of up to 50 social accounts at one time, in real time, showing off its ability to create fleets of engagement that businesses could use for reviews. In addition, although this hasn’t yet been demonstrated with Manus, the same technology could presumably be used for all kinds of marketing and influence campaigns, even political propaganda or disinformation.

But for the most part, AI power users and influencers in the West were impressed and celebrated Manus’s arrival — saying they were awed by initial tests once they received scarce beta invites, or observed the work of others with access to the tool.

Rowan Cheung, founder of The Rundown AI newsletter, described Manus AI’s launch as a potential turning point for AI agents, saying “China’s second DeepSeek moment is here” in a post on his LinkedIn account. “This AI agent called ‘Manus’ is going crazy viral in China right now… It’s like Deep Research + Operator + Claude Computer combined, and it’s REALLY good.”

Cheung personally tested Manus and found that it:

Created and deployed a biography website about himself, with 100% accuracy and real-time data retrieval.
Found top rental spots in San Francisco based on crime rates, AI industry presence and entrepreneurship density.
Developed a full AI course, generating eight chapters of content, including tools, use cases and prompts.

He received 500 invite codes from the Manus team and has been doling them out to his subscribers and readers.
Former Googler and AI-focused YouTuber Bilawal Sidhu shared a hands-on video review, calling Manus “the closest thing I have seen to an autonomous AI agent.” “It’s like you’re standing over the shoulder of somebody using a computer… asking them what to do at the highest level, and it basically does it for you.”

Sidhu tested Manus on various tasks, including:

Researching locations: Scanning Google Maps and news sources to recommend the best places based on regulations, accessibility and safety.
Developing video applications: Automating video


How Amended Rule 702 Affects Testimony In Patent Litigation

By Janice Ta, Helena Burns and Emmanuel Azih (March 17, 2025, 6:16 PM EDT) — In 2023, Federal Rule of Evidence 702, which governs the admissibility of expert testimony, had its most significant amendment in 25 years. The 2023 amendments updated the 2000 amendments in two ways to address the apparent failure of some courts to serve as proper gatekeepers in preventing unreliable expert evidence from reaching a jury:…

Law360 is on it, so you are, too. A Law360 subscription puts you at the center of fast-moving legal issues, trends and developments so you can act with speed and confidence. Over 200 articles are published daily across more than 60 topics, industries, practice areas and jurisdictions. A Law360 subscription includes daily newsletters, expert analysis, a mobile app, advanced search, judge information, real-time alerts, more than 450,000 searchable archived articles and more. Experience Law360 today with a free 7-day trial.


Alibaba’s new open source model QwQ-32B matches DeepSeek-R1 with way smaller compute requirements

Qwen Team — a division of Chinese e-commerce giant Alibaba developing its growing family of open-source Qwen large language models (LLMs) — has introduced QwQ-32B, a new 32-billion-parameter reasoning model designed to improve performance on complex problem-solving tasks through reinforcement learning (RL).

The model is available as open-weight on Hugging Face and on ModelScope under an Apache 2.0 license. This means it’s available for commercial and research uses, so enterprises can employ it immediately to power their products and applications (even ones they charge customers to use). It can also be accessed by individual users via Qwen Chat.

Qwen-with-Questions was Alibaba’s answer to OpenAI’s original reasoning model o1

QwQ, short for Qwen-with-Questions, was first introduced by Alibaba in November 2024 as an open-source reasoning model aimed at competing with OpenAI’s o1-preview. At launch, the model was designed to enhance logical reasoning and planning by reviewing and refining its own responses during inference, a technique that made it particularly effective in math and coding tasks.

The initial version of QwQ released in November 2024 (called simply “QwQ”) also featured 32 billion parameters and a 32,000-token context length. Alibaba highlighted its ability to outperform o1-preview in mathematical benchmarks like AIME and MATH, as well as scientific reasoning tasks such as GPQA. Despite its strengths, QwQ’s early iterations struggled with programming benchmarks like LiveCodeBench, where OpenAI’s models maintained an edge. Additionally, as with many emerging reasoning models, QwQ faced challenges such as language mixing and occasional circular reasoning loops.
However, Alibaba’s decision to release the model under an Apache 2.0 license ensured that developers and enterprises could freely adapt and commercialize it, distinguishing it from proprietary alternatives like OpenAI’s o1.

Since QwQ’s initial release, the AI landscape has evolved rapidly. The limitations of traditional LLMs have become more apparent, with scaling laws yielding diminishing returns in performance improvements. This shift has fueled interest in large reasoning models (LRMs) — a new category of AI systems that use inference-time reasoning and self-reflection to enhance accuracy. These include OpenAI’s o3 series and the massively successful DeepSeek-R1 from rival Chinese lab DeepSeek, an offshoot of Hong Kong quantitative analysis firm High-Flyer Capital Management.

A new report from web traffic analytics and research firm SimilarWeb found that since the launch of R1 in January 2025, DeepSeek has rocketed up the charts to become the most-visited AI model-providing website behind OpenAI.

Credit: SimilarWeb, AI Global Sector Trends on Generative AI

QwQ-32B, Alibaba’s latest iteration, builds on these advancements by integrating RL and structured self-questioning, positioning it as a serious competitor in the growing field of reasoning-focused AI. The context length of the new model has been extended to 131,000 tokens as well — similar to the 128,000 of OpenAI’s models and many others, though Google Gemini 2.0’s context remains superior at 2 million tokens. (Recall that context refers to the number of tokens the LLM can input and output in a single interaction, with a higher token count meaning more information; 131,000 tokens is equivalent to around a 300-page book.)

Scaling up performance with multi-stage reinforcement learning

Traditional instruction-tuned models often struggle with difficult reasoning tasks, but the Qwen Team’s research suggests that RL can significantly improve a model’s ability to solve complex problems.
QwQ-32B builds on this idea by implementing a multi-stage RL training approach to enhance mathematical reasoning, coding proficiency and general problem-solving.

The model has been benchmarked against leading alternatives such as DeepSeek-R1, o1-mini and DeepSeek-R1-Distilled-Qwen-32B, demonstrating competitive results despite having fewer parameters than some of these models. For example, while DeepSeek-R1 operates with 671 billion parameters (with 37 billion activated), QwQ-32B achieves comparable performance with a much smaller footprint — typically requiring 24 GB of vRAM on a GPU (Nvidia’s H100s have 80 GB), compared with more than 1,500 GB of vRAM for running the full DeepSeek-R1 (16 Nvidia A100 GPUs) — highlighting the efficiency of Qwen’s RL approach.

QwQ-32B follows a causal language model architecture and includes several optimizations:

64 transformer layers with RoPE, SwiGLU, RMSNorm and attention QKV bias;
Grouped-query attention (GQA) with 40 attention heads for queries and 8 for key-value pairs;
An extended context length of 131,072 tokens, allowing for better handling of long-sequence inputs;
Multi-stage training including pretraining, supervised fine-tuning and RL.

The RL process for QwQ-32B was executed in two phases:

Math and coding focus: The model was trained using an accuracy verifier for mathematical reasoning and a code execution server for coding tasks. This approach ensured that generated answers were validated for correctness before being reinforced.
General capability enhancement: In a second phase, the model received reward-based training using general reward models and rule-based verifiers. This stage improved instruction following, human alignment and agent reasoning without compromising its math and coding capabilities.
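The VRAM gap above can be sanity-checked with a rough back-of-envelope calculation. This is a sketch under stated assumptions, not official sizing guidance: it assumes roughly 4-bit quantized weights (0.5 bytes per parameter) behind the 24 GB QwQ-32B figure and FP16 weights (2 bytes per parameter) for the full DeepSeek-R1, and it ignores KV cache and activation memory, which add real overhead.

```python
def approx_weight_vram_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate VRAM needed just to hold model weights, in GiB.

    Ignores KV cache, activations and framework overhead, so real
    requirements are higher than this estimate.
    """
    return params_billions * 1e9 * bytes_per_param / 1024**3

# QwQ-32B with ~4-bit quantized weights (assumed): ~15 GiB of weights,
# leaving headroom within the ~24 GB figure for cache and activations.
qwq_gb = approx_weight_vram_gb(32, 0.5)

# Full DeepSeek-R1 (671B parameters) at FP16: ~1,250 GiB of weights alone,
# in the ballpark of the 1,500+ GB figure once overhead is included.
r1_gb = approx_weight_vram_gb(671, 2)

print(f"QwQ-32B: ~{qwq_gb:.0f} GiB, DeepSeek-R1: ~{r1_gb:.0f} GiB")
```

The same arithmetic explains why a single 80 GB H100 can host QwQ-32B comfortably while the full R1 needs a multi-GPU node.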
What it means for enterprise decision-makers

For enterprise leaders — including CEOs, CTOs, IT leaders, team managers and AI application developers — QwQ-32B represents a potential shift in how AI can support business decision-making and technical innovation. With its RL-driven reasoning capabilities, the model can provide more accurate, structured and context-aware insights, making it valuable for use cases such as automated data analysis, strategic planning, software development and intelligent automation.

Companies looking to deploy AI solutions for complex problem-solving, coding assistance, financial modeling or customer service automation may find QwQ-32B’s efficiency an attractive option. Additionally, its open-weight availability allows organizations to fine-tune and customize the model for domain-specific applications without proprietary restrictions, making it a flexible choice for enterprise AI strategies.

The fact that it comes from a Chinese e-commerce giant may raise security and bias concerns for some non-Chinese users, especially when using the Qwen Chat interface. But as with DeepSeek-R1, the model’s availability on Hugging Face for download, offline usage and fine-tuning or retraining suggests that these concerns can be overcome fairly easily. And it is a viable alternative to DeepSeek-R1.

Early reactions from AI power users and influencers

The release


Broadcasters Say Next-Gen TV Could Back Up GPS

By Christopher Cole (March 19, 2025, 5:24 PM EDT) — Broadcasters told federal regulators the impending transition to next-generation TV could come with an added benefit — the creation of a broadcast spectrum-based backup to the Global Positioning System….


Beyond RAG: SEARCH-R1 integrates search engines directly into reasoning models

Large language models (LLMs) have seen remarkable advancements in their reasoning capabilities. However, their ability to correctly reference and use external data — information they weren’t trained on — in conjunction with reasoning has largely lagged behind. This is an issue especially when using LLMs in dynamic, information-intensive scenarios that demand up-to-date data from search engines.

But an improvement has arrived: SEARCH-R1, a technique introduced in a paper by researchers at the University of Illinois at Urbana-Champaign and the University of Massachusetts Amherst, trains LLMs to generate search queries and seamlessly integrate search engine retrieval into their reasoning. With enterprises seeking ways to integrate these new models into their applications, techniques such as SEARCH-R1 promise to unlock new reasoning capabilities that rely on external data sources.

The challenge of integrating search with LLMs

Search engines are crucial for providing LLM applications with up-to-date, external knowledge. The two main methods for integrating search engines with LLMs are retrieval-augmented generation (RAG) and tool use, implemented through prompt engineering or model fine-tuning.

However, both methods have limitations that make them unsuitable for reasoning models. RAG often struggles with retrieval inaccuracies and lacks the ability to perform multi-turn, multi-query retrieval, which is essential for reasoning tasks. Prompting-based tool use often struggles with generalization, while training-based approaches require extensive annotated datasets of search-and-reasoning interactions, which are difficult to produce at scale. (In our own experiments with reasoning models, we found that information retrieval remains one of the key challenges.)
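To make the single-shot limitation concrete, here is a minimal sketch of classic RAG: retrieve once up front, stuff the results into the prompt, and generate. The `model`, `retriever` and document store below are hypothetical stand-ins, not anything from the paper; the point is only that the model gets no second chance to query if the first retrieval misses what it needs.

```python
def naive_rag(model, retriever, question, k=3):
    """Single-shot RAG: retrieve once up front, then generate.

    Unlike interleaved search-and-reason approaches, the model cannot
    issue follow-up queries if the first retrieval misses.
    """
    docs = retriever(question)[:k]
    prompt = "Context:\n" + "\n".join(docs) + f"\n\nQuestion: {question}\nAnswer:"
    return model(prompt)

# Hypothetical stand-ins for a real retriever and LLM, for illustration only.
docs_store = ["Paris is the capital of France.", "Berlin is the capital of Germany."]
retriever = lambda q: [d for d in docs_store if any(w in d.lower() for w in q.lower().split())]
model = lambda prompt: "Paris" if "Paris" in prompt else "I don't know"

print(naive_rag(model, retriever, "What is the capital of France?"))  # prints: Paris
```

If the first retrieval returns the wrong documents, this pipeline has no mechanism to notice and re-query — which is exactly the gap multi-turn approaches aim to close.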
SEARCH-R1

SEARCH-R1 enables LLMs to interact with search engines during their reasoning process, as opposed to having a separate retrieval stage. SEARCH-R1 defines the search engine as part of the LLM’s environment, enabling the model to integrate its token generation with search engine results seamlessly.

The researchers designed SEARCH-R1 to support iterative reasoning and search. The model is trained to generate separate sets of tokens for thinking, search, information and answer segments. This means that during its reasoning process (marked by <think></think> tags), if the model determines that it needs external information, it generates a <search></search> sequence that contains the search query. The query is then passed on to a search engine and the results are inserted into the context window in an <information></information> segment. The model then continues to reason with the added context and, when ready, generates the results in an <answer></answer> segment. This structure allows the model to invoke the search engine multiple times as it reasons about the problem and obtains new information (see example below).

Example of LLM reasoning with SEARCH-R1 (source: arXiv)

Reinforcement learning

Training LLMs to interleave search queries with their reasoning chain is challenging. To simplify the process, the researchers designed SEARCH-R1 to train the model through pure reinforcement learning (RL), where the model is left to explore the use of reasoning and search tools without guidance from human-generated data.

SEARCH-R1 uses an “outcome-based reward model,” in which the model is evaluated based only on the correctness of the final response. This eliminates the need for complex reward models that verify the model’s reasoning process. This is the same approach used in DeepSeek-R1-Zero, where the model was given a task and judged only on the outcome.
The use of pure RL obviates the need to create large datasets of manually annotated examples (supervised fine-tuning). “SEARCH-R1 can be viewed as an extension of DeepSeek-R1, which primarily focuses on parametric reasoning by introducing search-augmented RL training for enhanced retrieval-driven decision-making,” the researchers write in their paper.

SEARCH-R1 in action

The researchers tested SEARCH-R1 by fine-tuning the base and instruct versions of Qwen-2.5 and Llama-3.2 and evaluating them on seven benchmarks encompassing a diverse range of reasoning tasks requiring single-turn and multi-hop search. They compared SEARCH-R1 against different baselines: direct inference with chain-of-thought (CoT) reasoning, inference with RAG, and supervised fine-tuning for tool use.

SEARCH-R1 consistently outperforms baseline methods by a fair margin. It also outperforms reasoning models trained with RL but without search retrieval. “This aligns with expectations, as incorporating search into LLM reasoning provides access to relevant external knowledge, improving overall performance,” the researchers write. SEARCH-R1 is also effective across different model families and both base and instruction-tuned variants, suggesting that RL with outcome-based rewards can be useful beyond pure reasoning scenarios. The researchers have released the code for SEARCH-R1 on GitHub.

SEARCH-R1’s ability to autonomously generate search queries and integrate real-time information into reasoning can have significant implications for enterprise applications. It can enhance the accuracy and reliability of LLM-driven systems in areas such as customer support, knowledge management and data analysis. By enabling LLMs to dynamically adapt to changing information, SEARCH-R1 can help enterprises build more intelligent and responsive AI solutions. This capability can be especially helpful for applications that require access to constantly changing data and multiple steps to find an answer.
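The interleaved tag protocol described above can be sketched as a simple control loop. Everything here is a toy illustration: the scripted `fake_model` and `fake_search` are hypothetical stand-ins for a trained policy and a real search engine, and the loop merely mimics the <think>/<search>/<information>/<answer> structure from the paper.

```python
import re

def run_search_r1_loop(model_step, search_engine, question, max_turns=4):
    """Toy sketch of SEARCH-R1-style interleaved reasoning and retrieval.

    model_step(context) returns the model's next generation; whenever it
    emits a <search>query</search> span, we call the search engine and
    append the results inside <information></information>, then let the
    model continue reasoning with the enlarged context.
    """
    context = question
    for _ in range(max_turns):
        out = model_step(context)
        context += out
        answer = re.search(r"<answer>(.*?)</answer>", out, re.DOTALL)
        if answer:
            return answer.group(1).strip()
        query = re.search(r"<search>(.*?)</search>", out, re.DOTALL)
        if query:
            results = search_engine(query.group(1).strip())
            context += f"<information>{results}</information>"
    return None

# Scripted stand-ins for a trained model and a search engine, illustration only.
def fake_model(context):
    if "<information>" not in context:
        return "<think>I need the capital.</think><search>capital of France</search>"
    return "<think>Found it.</think><answer>Paris</answer>"

def fake_search(query):
    return "Paris is the capital of France."

print(run_search_r1_loop(fake_model, fake_search, "What is the capital of France?"))
# prints: Paris
```

In the real system the tag structure is produced by the RL-trained model itself; the surrounding loop's only jobs are to execute queries and splice results back into the context.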
It also suggests that we have yet to explore the full potential of the new reinforcement learning paradigm that has emerged since the release of DeepSeek-R1.


2. What is the best age to have a first child?

People across the 18 mostly middle-income countries surveyed say, on average, that 26.1 is the best age to have a first child. There is a lot of agreement on this timing, and in most countries, average ideal ages fall between 25 and 27. But some countries stand out. People in Tunisia say the ideal age to have a first child is just under 30, on the higher end of the averages suggested. And adults in Argentina say it is best to have a child at 27.7 years old. By comparison, people in Bangladesh and South Africa say the ideal age to have a first child is before 25.

Women in the countries surveyed generally do become mothers in their late 20s or early 30s, according to data from the United Nations. This is somewhat older than the average ideal age people suggest in our survey overall. Women in Bangladesh, Colombia and Mexico typically have their first child at around 26 years old, slightly younger than women in the other countries surveyed. And in Chile and Tunisia, the average age at which women have their first child is 30 or older. Refer to Appendix A for actual average ages at first birth in each country.

There is generally a lot of agreement within countries, too, about the best age to have a first child. In 12 of the 18 countries surveyed, at least 40% of adults think sometime between the ages of 25 and 29 is ideal. Indonesians show a particular consensus: 58% say the ideal age is in this range; roughly a quarter say between 20 and 24 is best; and very few think it’s ideal to have a first child outside of one’s 20s.

Some countries have much less agreement. Responses in Tunisia, for example, are more evenly spread across age ranges. Around a third of Tunisian adults think the best age to have a child is between 30 and 34. And of the countries surveyed, Tunisia has the largest share of people who think age 35 or older is ideal for this milestone (16%).
Views by gender, age and education

Men generally suggest a slightly older ideal age for having a first child than women. The average ages men say are best range from 24.8 in South Africa to 31.4 in Tunisia, while the average ages women choose range from 22.1 in Bangladesh to 28.5 in Tunisia. The Philippines is the only country where men and women agree on the best age for this milestone (25.2).

Views of the best age for having a first child also vary by the age of respondents themselves. In most countries, adults under 35 think it is ideal to have a child slightly later in life, compared with adults ages 50 and older. In Peru, for instance, younger adults think the best age to become a parent is 27.7, while older adults suggest 25.0 (a gap of 2.7 years).

Opinions also vary by education. In 17 of the 18 countries surveyed, adults with more education say it is best to have a first child slightly later in life, compared with those who have less education. The gap is particularly stark in Latin American countries:

Chile: The average ideal age suggested by people with more education is 3.5 years older.
Argentina: 3.2 years older
Mexico: 3.2 years older
Colombia: 3.1 years older
Peru: 3.0 years older
Brazil: 2.6 years older


These $140 iPads Are Perfect for Work, but Selling Out Fast

TL;DR: Get a refurbished iPad 7th Gen and faux leather case for $139.99 while supplies last — fewer than 50 are left in stock (reg. $249.99). Not every business needs top-of-the-line tech. Sometimes, practical and affordable wins the race. If you or your team need affordable tablets, consider shopping refurbished devices. You’ll save hundreds of dollars and help out the environment, too. Take this Apple iPad 7th Gen as an example. While it originally retailed for $249.99, you can get one for just $139.99 — only until they’re sold out. At a price this low, we expect them to go fast, so order yours before they’re gone. More about this refurbished iPad deal You’re saving $110 on this iPad because it had another life before arriving at your door, but that doesn’t mean it’s used. The iPads are in grade “A” condition, the highest rating we give to devices, meaning you may not even notice a single scratch. Besides, this iPad delivers what you or your employees need, whether the tablet will be used in the field, as a POS device, for note-taking, or simply attending meetings remotely. While the 7th Gen iPad isn’t the newest model available, it updates to the latest iPadOS for continued security improvements. It also has the basic Apple Home Button and a large screen to keep things simple for those who aren’t super tech-savvy. Take a look at the rest of this iPad’s features: 32GB of storage. Up to 10 hours of battery life. High-quality front and rear cameras. Faux leather cases included with purchase. Don’t miss out: Get your refurbished iPad deal for $139.99 before they’re sold out (reg. $249.99). Apple iPad 7th Gen (2019) 32GB Wi-Fi Space Gray with Case & Charger (Refurbished) – $139.99 StackSocial prices subject to change.


Why cloud is integral to Japan Rugby Football Union’s media strategy

Based on this new strategy, JRFU has a business co-creation partner framework with J Sports to produce official videos for League One, and since last year, it also covers the men’s 15-a-side national rugby union matches.

Partnering with AWS

Amazon Web Services plays an important role in Japan’s rugby media strategy, including AWS Elemental Live, which encodes live video from the matches and uploads it to the cloud, and AWS Elemental MediaLive, a live video processing service that encodes streaming video. Video content is then stored in Amazon S3, where it is indexed for preview and search.

Agility and better economics are common incentives for organizations to move to the cloud, but the overall appeal of the ecosystem matters, too: the services JRFU wants to use, such as the remote commentary system Spalk, are provided on AWS, which makes video transfer smoother. And by building an end-to-end system on AWS, JRFU is able to reduce development man-hours, such as compatibility testing, and use managed services to reduce the management burden.

In addition, the video and photo archive system built in collaboration with AWS allows media exposure during and immediately after matches. Real-time match footage can be distributed on official social media (SNS) accounts and provided to other media. In the 2023-24 season, for example, one match was live-streamed per week on the League One official website. Although the foundation for using video was set, only a few teams used it at first, and there were other promotion challenges to overcome.


Turn Canva and OpenAI into Your Secret Weapons for Bulk Email and Content Creation

Image: StackCommerce TL;DR: Automate your email marketing and bulk content creation with AI-powered tools in this six-course Canva and ChatGPT bundle for $24.99 (reg. $120). As a solopreneur, your time is too valuable to waste designing every email and marketing asset manually, but did you know there are ways you can use AI automation to streamline those tedious tasks? Instead of spending hours crafting individual emails or designing graphics from scratch, Canva’s tools, powered by OpenAI, streamline the entire process. This beginner-friendly 2025 six-course Canva bundle teaches you how to leverage Canva’s AI features for bulk email marketing, content automation, and branding, helping you create high-quality assets in minutes without needing a design or copywriting team. Six courses to automate, design, and scale your business marketing Stop writing the same email repeatedly after taking the Canva & ChatGPT Hacks for Bulk Content Creation course. You’ll learn how to use ChatGPT to generate high-converting email sequences, newsletters, and promotions in bulk, then pair them with Canva’s AI tools to create branded visuals that match your messaging instantly. Automate your marketing without sacrificing quality or consistency. Your emails shouldn’t just deliver a message; they should sell your brand. Canva for Branding, Business & Marketing will show you how to design reusable email templates, social media graphics, and promotional materials that keep your branding consistent across every campaign. Whether sending weekly offers or nurturing leads, this course helps you create professional, on-brand content in minutes. Additional courses cover streamlining your content creation beyond just emails and branding; Canva & AI for Viral Short-Form Content Creation teaches you how to automate video content for social media, making it easy to align your marketing efforts across platforms. 
Canva for Innovative Graphic Design helps you create AI-powered logos, typography, and visuals, ensuring brand consistency in every piece of marketing material you produce. Canva to Create Business Cards allows you to design a professional, reusable digital business card perfect for email signatures and networking events. Content Creation in Bulk with Canva & ChatGPT walks you through automating graphics, captions, and branded visuals for both email and social media marketing.

If you want to work smarter, not harder, this bundle teaches you how to leverage AI and automation to scale your business while reducing manual work. Get the 2025 six-course Canva AI bundle for just $24.99 (reg. $120) and start automating your content creation today.

Note: Many AI-powered features covered in these courses require a Canva Pro subscription, which is not included. StackSocial prices subject to change.
