Borderless AI emerges from stealth with $32M in funding to disrupt HR tech

A new artificial intelligence startup is betting that HR departments will become the next major battleground for enterprise AI adoption, launching a specialized search engine that aims to transform how companies manage their workforce. Borderless AI, which emerged from stealth last year, announced today the release of HRGPT, a free AI-powered search engine that allows companies to query their internal HR data alongside employment laws and regulations. The company also disclosed a $5 million strategic investment from AI company Cohere, bringing its total seed funding to $32 million. “Every HR department is going to have AI agents that manage various aspects across the HR stack,” said Willson Cross, cofounder and CEO of Borderless AI, in an exclusive interview with VentureBeat. “We’re proud to be at the forefront of that vertical.”

How Borderless AI’s HRGPT is transforming workforce management

The Toronto-based startup is positioning itself to compete with established HR software providers like Workday and ADP by focusing exclusively on AI-powered solutions. Its platform already counts several multinational companies as customers, including Dunlop Sporting Goods, which uses the technology to manage employee onboarding across 17 global offices. Unlike general-purpose AI chatbots, HRGPT combines real-time web search with access to internal company data and specialized HR knowledge. The system can perform tasks ranging from generating employment agreements to tracking time-off requests and managing international expense reimbursements. “Unlike ChatGPT, we have real-time web search. When a customer asks HRGPT a question, it scans the web for real-time sourcing and citations,” Cross told VentureBeat. The platform also integrates with PricewaterhouseCoopers for employment law expertise.
Borderless AI’s platform displays employee time-off requests and compliance data in a conversational interface designed for HR professionals. (Credit: Borderless AI)

The investment from Cohere signals growing interest in vertical-specific AI applications for the enterprise. While consumer AI tools like ChatGPT have captured public attention, Cross believes the next wave of AI adoption will come from businesses. “For the next two to three years, it’s going to be the businesses that are catching up and waking up to bringing AI to their organizations,” he said. “HR is one that has many applicable use cases.” Borderless AI’s approach reflects a broader trend of AI companies focusing on specific industries rather than trying to build general-purpose tools. Similar vertical-focused companies include Harvey AI in legal tech and Sierra in customer service.

Building a billion-dollar HR tech company with AI at its core

The company’s ambitious vision includes automating complex HR processes like payroll management and employee analytics. Cross indicated they aim to build a billion-dollar company with fewer than 50 employees by leveraging AI extensively in their own operations. However, Borderless AI faces significant challenges, including prioritizing which features to build next amid strong customer demand. The company must also maintain accuracy and compliance in its automated HR functions, particularly for sensitive tasks like employment agreements and international payments. The startup’s success could signal whether specialized AI tools will successfully compete against established enterprise software providers who are racing to add AI capabilities to their existing products. For now, early customers appear convinced: Borderless AI reports that its AI agents perform tasks hourly across its customer base. source

Borderless AI emerges from stealth with $32M in funding to disrupt HR tech Read More »

Microsoft promotes American-first AI

To this end, the Microsoft exec referred to the company’s announced plan to invest more than $35 billion in 14 countries within three years “to build trusted and secure AI and cloud datacenter infrastructure.” According to Smith, Microsoft’s global infrastructure now reaches 40 countries, “including in the Global South, where China has frequently focused so many of its Belt and Road investments.” To build on this, Smith is calling for more political support, writing, “the most important U.S. public policy priority should be to ensure that the U.S. private sector can continue to advance with the wind at its back.” The United States can’t afford to “slow its own private sector with heavy-handed regulations,” Smith adds, calling for a “pragmatic export control policy.” After all, the aim is to “expand rapidly and provide a reliable source of supply to the many countries that are American allies and friends.”

Europe between IT dependence and the desire for sovereignty

Whether these allies and friends will join in the AI game outlined by Smith is questionable, however, despite Microsoft publicly announcing billions in investments in European infrastructure last year, including €3.2 billion in Germany. With the AI Act, the EU has passed a set of rules that prescribes clear guidelines for the use of AI in Europe. AI lobbyists are currently haggling over the final wording to pull the teeth out of the regulation in the interests of their own business. source

Microsoft promotes American-first AI Read More »

SEC's Last-Minute Musk Suit Could Be Scuttled Under Trump

By Jessica Corso (January 15, 2025, 10:48 PM EST) — The U.S. Securities and Exchange Commission’s latest lawsuit against Elon Musk is unlikely to be viewed favorably by the incoming administration of President-elect Donald Trump, which may press for a lesser penalty or even move to dismiss the case outright, attorneys told Law360 on Wednesday…. source

SEC's Last-Minute Musk Suit Could Be Scuttled Under Trump Read More »

MiniMax unveils its own open source LLM with industry-leading 4M token context

MiniMax is perhaps best known today in the U.S. as the Singaporean company behind Hailuo, a realistic, high-resolution generative AI video model that competes with Runway, OpenAI’s Sora and Luma AI’s Dream Machine. But the company has far more tricks up its sleeve: today, for instance, it announced the release and open-sourcing of the MiniMax-01 series, a new family of models built to handle ultra-long contexts and enhance AI agent development. The series includes MiniMax-Text-01, a foundation large language model (LLM), and MiniMax-VL-01, a visual multimodal model.

A massive context window

MiniMax-Text-01 is of particular note for enabling up to 4 million tokens in its context window — equivalent to a small library’s worth of books. The context window is how much information the LLM can handle in one input/output exchange, with words and concepts represented as numerical “tokens,” the LLM’s own internal mathematical abstraction of the data it was trained on. And while Google previously led the pack with its Gemini 1.5 Pro model and 2-million-token context window, MiniMax remarkably doubled that. As MiniMax posted on its official X account today: “MiniMax-01 efficiently processes up to 4M tokens — 20 to 32 times the capacity of other leading models. We believe MiniMax-01 is poised to support the anticipated surge in agent-related applications in the coming year, as agents increasingly require extended context handling capabilities and sustained memory.” The models are available now for download on Hugging Face and GitHub under a custom MiniMax license, for users to try directly on Hailuo AI Chat (a ChatGPT/Gemini/Claude competitor), and through MiniMax’s application programming interface (API), where third-party developers can link their own unique apps to them.
MiniMax is offering APIs for text and multi-modal processing at competitive rates:

$0.20 per 1 million input tokens
$1.10 per 1 million output tokens

For comparison, OpenAI’s GPT-4o costs $2.50 per 1 million input tokens through its API, a staggering 12.5X more expensive. MiniMax has also integrated a mixture-of-experts (MoE) framework with 32 experts to optimize scalability. This design balances computational and memory efficiency while maintaining competitive performance on key benchmarks.

Striking new ground with Lightning Attention architecture

At the heart of MiniMax-01 is a Lightning Attention mechanism, an innovative alternative to transformer architecture. This design significantly reduces computational complexity. The models consist of 456 billion parameters, with 45.9 billion activated per inference. Unlike earlier architectures, Lightning Attention employs a mix of linear and traditional SoftMax layers, achieving near-linear complexity for long inputs. SoftMax, for those new to the concept, is the transformation of input values into probabilities that add up to 1, so that the LLM can approximate which meaning of the input is likeliest. MiniMax has rebuilt its training and inference frameworks to support the Lightning Attention architecture. Key improvements include:

MoE all-to-all communication optimization: reduces inter-GPU communication overhead.
Varlen ring attention: minimizes computational waste for long-sequence processing.
Efficient kernel implementations: tailored CUDA kernels improve Lightning Attention performance.

These advancements make MiniMax-01 models accessible for real-world applications, while maintaining affordability.

Performance and benchmarks

On mainstream text and multimodal benchmarks, MiniMax-01 rivals top-tier models like GPT-4 and Claude-3.5, with especially strong results on long-context evaluations.
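The SoftMax transformation described above is easy to see in a few lines of code — a minimal illustration of the general concept, not MiniMax's implementation:

```python
import math

def softmax(scores):
    """Convert raw scores (logits) into probabilities that sum to 1."""
    # Subtract the max score before exponentiating, for numerical stability.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
print(probs)       # the highest score gets the largest probability
print(sum(probs))  # the probabilities sum to 1
```

This exponentiate-and-normalize step is what lets a model treat its output scores as a probability distribution over possible next tokens.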
Notably, MiniMax-Text-01 achieved 100% accuracy on the Needle-In-A-Haystack task with a 4-million-token context. The models also demonstrate minimal performance degradation as input length increases. MiniMax plans regular updates to expand the models’ capabilities, including code and multi-modal enhancements. The company views open-sourcing as a step toward building foundational AI capabilities for the evolving AI agent landscape. With 2025 predicted to be a transformative year for AI agents, the need for sustained memory and efficient inter-agent communication is increasing. MiniMax’s innovations are designed to meet these challenges.

Open to collaboration

MiniMax invites developers and researchers to explore the capabilities of MiniMax-01. Beyond open-sourcing, its team welcomes technical suggestions and collaboration inquiries at [email protected]. With its commitment to cost-effective and scalable AI, MiniMax positions itself as a key player in shaping the AI agent era. The MiniMax-01 series offers an exciting opportunity for developers to push the boundaries of what long-context AI can achieve. source

MiniMax unveils its own open source LLM with industry-leading 4M token context Read More »

AI tools to elevate your job search in 2025

More than half of knowledge workers now use generative AI weekly, according to recent research from Asana’s Work Innovation Lab, in partnership with Anthropic. The study also found that uptake ramped up by 44% over nine months in 2024. And those who use AI daily benefit most: 89% reported a productivity boost, whereas casual monthly users only saw a 39% increase in productivity. The report also found that knowledge workers believe generative AI has the potential to automate 31% of their job responsibilities. And the more ways they use AI tools at work, the more possibilities they see.

8 jobs to discover this week

Full stack AI developer, Witteveen+Bos, Overijssel
Data Analyst AI Team, Lely, Zuid-Holland
AI Consultant, Refreshworks, Den Haag
Software Engineer C#, Profield, Gelderland
Java Software Engineer, BKWI, Provincie Utrecht
Python Developer, H2B IT Solutions, Noord-Holland
Senior DevOps Engineer – Microsoft 365 Specialist, Cognizant, Noord-Holland
Platform Engineer MSI, Schiphol Group, Haarlemmermeer

“Already, knowledge workers are deploying AI across an average of five different use cases at work, from technical writing to idea generation and brainstorming, demonstrating AI’s versatility across various workflows,” the study’s authors say. “As workers apply AI to a broader range of tasks, they discover innovative ways to enhance their work that they might not have initially considered. This leads them to find new applications for AI, creating a virtuous cycle of AI-powered productivity: the more you use it, the more you find new ways to use it, and the more productive you become.” Of course, these use cases differ across industries, with those working in technology most likely to use generative AI for technical writing, for example.
Those working in financial services are more likely to use it for process automation, and it won’t be a surprise to find that workers in the media and entertainment sectors gravitate towards tools for image generation. To date, only about 31% of companies have a formal AI strategy in place, which means that in many cases workers’ usage of genAI tools is unregulated and has led to the rise of the ‘BYOAI’ trend, AKA bring your own AI to work. One way all workers can leverage generative AI tools (regardless of their employers’ stance) is in looking for a new role. Within recruitment, automation is taking over, and software is now doing much of what humans once managed, like sourcing, outreach, and application filtering. Some companies are even using AI to conduct job interviews, with mixed results. In the US, a case was filed last year concerning pharmacy chain CVS. As part of its application process, the company utilises video-interview technology which uses artificial intelligence for analysis. The plaintiff alleged that CVS broke Massachusetts law because it did not provide an opt-out.

Amplifying your job search

While there may be downsides, the use of generative AI in job seeking is a net positive. Consider the Reddit user, for example, who recently created an AI bot to automatically apply to 1,000 jobs, resulting in 50 interviews in one month. That’s far more than many job hunters can expect using traditional career search methods. The user, who subsequently deleted their Reddit account, said at the time: “The tailored CVs and cover letters, customized based on each job description, made a significant difference.” Speed and accuracy matter, and on the House of Talent Job Board, a new conversational AI job search agent can help you locate your next tech position quickly and accurately.
Find the agent on the bottom right-hand side of your screen, where it will allow you to search for best-matched jobs using your CV. Or you can tell it a bit about yourself, your skills, your current location, or where you’d like to work. Once you’ve isolated the best roles to apply for, generative AI can be tasked with optimising your application materials thanks to its time-saving capabilities. AI tools can help you make fewer grammatical mistakes, align your experience effectively against the actual job description, and essentially speed up the whole process. Perplexity or ChatGPT can be used to quickly compare your CV against a job ad, outputting areas you need to finesse or skills you should highlight, helping you optimize application materials for each role you apply for. If you’ve ever considered sliding into a recruiter’s DMs on LinkedIn, for example, or sending an email to a hiring manager on spec, then this is another area in which genAI can help. Claude, for example, can help you compose succinct, effective messages or emails you can then edit to make sure they’re completely on point. Cover letters are another time-consuming element of a job hunt that many find daunting. Many job applicants simply don’t bother unless it’s a specific requirement. However, hiring managers like cover letters because they add additional context to your CV. You can showcase your motivation and desire for the role, along with more intangible talents such as your soft skills. The good news is that this process can also be simplified by prompting a genAI tool to create a cover letter based on your CV. This framework can then be padded out as you see fit — add in additional experience or KPIs you succeeded with, along with an explanation of why you’d really love the job. And that’s not all. AI can help you research companies, positions, and terminology ahead of job interviews, helping you prepare.
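The CV-versus-job-ad comparison described above boils down to assembling one well-structured prompt before pasting it into ChatGPT, Perplexity, or any other chatbot. A minimal sketch; the template wording and function name are illustrative, not a prescribed format:

```python
def build_gap_analysis_prompt(cv_text: str, job_ad_text: str) -> str:
    """Assemble a single prompt asking an LLM to compare a CV against a job ad."""
    return (
        "Compare the CV below against the job description and list:\n"
        "1. Requirements the CV already demonstrates.\n"
        "2. Requirements that are missing or only weakly evidenced.\n"
        "3. Skills in the CV worth highlighting for this role.\n\n"
        f"--- CV ---\n{cv_text}\n\n"
        f"--- Job description ---\n{job_ad_text}\n"
    )

prompt = build_gap_analysis_prompt(
    "Five years of Python, some SQL, no cloud experience.",
    "Senior Python developer; Kubernetes and AWS required.",
)
print(prompt)
```

Keeping the CV and job ad in clearly delimited sections makes it easy to reuse the same prompt for every role you apply to, swapping in only the job description.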
You can also use an AI tool as a sounding board for interview preparation, by asking it to generate sample questions for a software engineering role, for example. But no matter what tools or platforms you use, it’s incumbent on you to check the outputs. Generative AI tools are great assistants, but you’re in the driving seat. Ready to look for a new tech role? Check out The Next Web Job Board now source

AI tools to elevate your job search in 2025 Read More »

Outgoing FCC Chair Touts 'Wins On The Board'

With less than a week left in office, the chief of the Biden-era Federal Communications Commission on Wednesday highlighted the accomplishments of her tenure, including efforts to connect more Americans and advance space-based communications, but warned that a number of problems ranging from cybersecurity threats to the digital divide persist. source

Outgoing FCC Chair Touts 'Wins On The Board' Read More »

Do new AI reasoning models require new approaches to prompting?

The era of reasoning AI is well underway. After OpenAI once again kickstarted an AI revolution with its o1 reasoning model introduced back in September 2024 — which takes longer to answer questions, but with the payoff of higher performance, especially on complex, multi-step problems in math and science — the commercial AI field has been flooded with copycats and competitors. There’s DeepSeek’s R1, Google Gemini 2 Flash Thinking and, just today, LlamaV-o1, all of which seek to offer similar built-in “reasoning” to OpenAI’s new o1 and upcoming o3 model families. These models engage in “chain-of-thought” (CoT) prompting — or “self-prompting” — forcing them to reflect on their analysis midstream, double back, check over their own work and ultimately arrive at a better answer than just shooting it out of their embeddings as fast as possible, as other large language models (LLMs) do. Yet the high cost of o1 and o1-mini ($15.00/1M input tokens vs. $1.25/1M input tokens for GPT-4o on OpenAI’s API) has caused some to balk at the supposed performance gains. Is it really worth paying 12X as much as the typical, state-of-the-art LLM? As it turns out, there is a growing number of converts — but the key to unlocking reasoning models’ true value may lie in users prompting them differently. Shawn Wang (founder of AI news service Smol) featured on his Substack over the weekend a guest post from Ben Hylak, the former Apple interface designer for visionOS (which powers the Vision Pro spatial computing headset) and co-founder of Dawn, an analytics and diagnostics platform for AI products. The post has gone viral, as it convincingly explains how Hylak prompts OpenAI’s o1 model to receive incredibly valuable outputs (for him).
In short, instead of writing prompts for the o1 model, the human user should think about writing “briefs”: more detailed explanations that include lots of context up front about what the user wants the model to output, who the user is, and what format they want the model to use. As Hylak writes on Substack:

“With most models, we’ve been trained to tell the model how we want it to answer us. e.g. ‘You are an expert software engineer. Think slowly and carefully.’ This is the opposite of how I’ve found success with o1. I don’t instruct it on the how — only the what. Then let o1 take over and plan and resolve its own steps. This is what the autonomous reasoning is for, and can actually be much faster than if you were to manually review and chat as the ‘human in the loop’.”

Hylak also includes a great annotated screenshot of an example prompt for o1 that produced useful results for a list of hikes. This blog post was so helpful that OpenAI’s own president and co-founder Greg Brockman re-shared it on his X account with the message: “o1 is a different kind of model. Great performance requires using it in a new way relative to standard chat models.” I tried it myself on my recurring quest to learn to speak fluent Spanish, and here was the result, for those curious. Perhaps not as impressive as Hylak’s well-constructed prompt and response, but definitely showing strong potential. Separately, even when it comes to non-reasoning LLMs such as Claude 3.5 Sonnet, there may be room for regular users to improve their prompting to get better, less constrained results. As Louis Arge, former Teton.ai engineer and current creator of neuromodulation device openFUS, wrote on X, “one trick i’ve discovered is that LLMs trust their own prompts more than my prompts,” and provided an example of how he convinced Claude to be “less of a coward” by first “trigger[ing] a fight” with him over its outputs.
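The “brief” idea — heavy on goal, context and output format, silent on how to reason — can be sketched as a small template. The section names below are our own illustration of Hylak's approach, not his exact wording:

```python
def build_brief(goal: str, context: str, output_format: str) -> str:
    """Assemble a 'brief'-style prompt for a reasoning model: describe the
    what (goal, background, desired output), not the how (no step-by-step
    instructions or role-play) — the model plans its own reasoning."""
    return (
        f"Goal:\n{goal}\n\n"
        f"Context about me:\n{context}\n\n"
        f"Output format:\n{output_format}\n"
    )

brief = build_brief(
    "Recommend day hikes within two hours of San Francisco.",
    "Intermediate hiker, prefer coastal views, hiking with a visiting friend.",
    "A ranked list of three hikes with distance, elevation gain, and why each fits.",
)
print(brief)
```

Note what is absent: no “think step by step,” no expert persona. That restraint is the point of the brief style — the reasoning model supplies its own plan.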
All of which goes to show that prompt engineering remains a valuable skill as the AI era wears on. source

Do new AI reasoning models require new approaches to prompting? Read More »

Improving CX Can Drive More Than One Billion Dollars In Revenue (2024)

Each year, we calculate how much business growth a one-point improvement in Forrester’s Customer Experience Index (CX Index™) drives. For 2024, we published the results in the report How Customer Experience Drives Business Growth, 2024. The report includes the dollar upside of improving CX Index by one point for 12 industries: airlines, luxury auto manufacturers, mass-market auto manufacturers, auto/home insurers, multichannel banks, direct banks, credit card issuers, health insurers, midscale hotels, upscale hotels, investment firms, and retailers.

A Sneak Peek Into The Business Growth From CX In 2024

The benefits of improving CX can be massive. For example, for a mass-market auto manufacturer, improving CX by one point can lead to more than $1 billion in additional revenue — this is because improving CX increases the chance that customers will buy their next car from the same brand and take the car to the brand’s dealership for service needs. For an auto/home insurer, it’s close to $370 million. In many industries, the upside of making a happy customer even happier is higher than that of placating an unhappy customer. This is because the growth benefits of improving CX increase exponentially when going from “good” to “excellent” for those industries, which include some financial services industries. CX pros in firms where this relationship holds true must focus on identifying CX drivers that move customers from “OK” and “good” CX scores to “excellent” scores. The effect of recommendations on the business upside of CX is small: for each of the industries in our analysis, acquiring new customers via recommendations accounts for less than 7% of the overall business benefit from improved CX.

Calculate These Numbers For Your Own Firm

Should you use our numbers to communicate the value of CX in your firm? Yes and no. Use them to get initial buy-in that CX drives business results. But don’t just assume that your numbers will look the same.
Instead, calculate the business upside of CX for your own firm. Here is how we calculated it — hopefully, you will find this useful:

1. Calculate what each customer is worth, depending on how loyal they plan to be. Forrester’s Customer Experience Benchmark Survey measures the quality of customers’ experiences and their loyalty intentions. Together with other data, we calculate a revenue potential for each customer. How we calculate it depends on how companies in each industry make money.

2. Create models that link CX Index and revenue potential. Our analysis shows, for each industry, the effect of CX changes on business outcomes. We also found out whether that effect changes based on whether we go from a low CX Index score to a medium one or from a medium one to a high one.

3. Calculate the upside of improving CX by one point. Our model shows the impact on a customer’s revenue potential when the CX Index score of the industry rises by one point. We then multiplied that per-customer upside by the number of customers of a big brand in the industry (we focused on the largest brands in the CX Index in each industry, as a few big brands dominate each industry that we investigate).

Forrester Clients: Use Our Five-Step Solution Blueprint To Calculate The Business Impact Of CX

The image below shows step one. Click Prove That CX Efforts Produced Business Results for all details.

Thank you for your major contribution to this research, James Williams! source
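Once steps 1 and 2 have produced a modeled per-customer figure, the final step is simple arithmetic. A sketch with made-up numbers — Forrester's actual models and inputs are proprietary, so the values below are purely illustrative:

```python
def cx_upside(per_customer_upside: float, customers: int) -> float:
    """Step 3: multiply the modeled per-customer revenue upside of a
    one-point CX Index gain by the brand's customer count."""
    return per_customer_upside * customers

# Hypothetical: a $50 per-customer upside across 20 million customers
# yields a billion-dollar total upside.
total = cx_upside(50.0, 20_000_000)
print(f"${total:,.0f}")  # $1,000,000,000
```

The multiplication is trivial; the hard, firm-specific work is in steps 1 and 2 — estimating the per-customer revenue potential and how it responds to CX changes.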

Improving CX Can Drive More Than One Billion Dollars In Revenue (2024) Read More »