PicsArt’s creative AI playbook: A vision for contextual intelligence, AI agents

Whether you’re an Android or iOS person, you’ve probably heard of PicsArt. The platform launched more than a decade ago and has become one of the go-to services for all things image and video editing, with more than 150 million monthly active users.

However, it hasn’t been an easy journey for the company. Despite being an early mover in smartphone-based editing, PicsArt has seen significant competition from players like Canva and Adobe, who have been playing a cat-and-mouse game for quite some time, building their own similar products. When I spoke with Artavazd Mehrabyan, the company’s CTO, at the recent WCIT conference in Armenia, he was vocal about the challenges, saying it is tough to be, or at least stay, different for long in this market.

“A lot of things that PicsArt had before were copied into the competitors. PicsArt was the first all-in-one editing service on mobile. There was no other player before 2011. We started with this approach and it was copied, among many other things,” Mehrabyan said.

He pointed out that the same is happening with AI, where competitors, including mainstream photo services, are offering very similar capabilities. For example, PicsArt offers object generation, allowing users to use advanced AI to create required photo elements. The same capability has also been incorporated into other products in the category, creating an overlap of sorts.

However, instead of pushing to stand out by adding more tools to its existing batch of over two dozen AI capabilities, the company is looking to make a mark on users by improving the quality of what it is delivering.
Specifically, Mehrabyan said, the focus is on how they are productizing and tailoring the features to help customers get to their goal, whether they want to remove a specific object from a vacation image or generate visually appealing advertisements, complete with images and copy.

Training high-quality creative AI

In the early days, before AI was a thing, Mehrabyan said most of PicsArt’s technology research and effort went toward making mobile-based editing seamless.

“It was very hard to get all these editing functionality working on the device offline. Then, the next challenge was to scale our ecosystem and infrastructure to support a surging user base. This took us to hybrid infrastructure. We started with multi-cloud and a data center, which, till now, continues to be the best solution as it’s more cost-efficient, highly performant and very flexible,” Mehrabyan explained.

With this tech stack in place, the company launched its first AI feature in 2016, running a handful of small models offline on user devices. This gradually grew into a large-scale AI effort, with the company transforming into an AI-first organization and leveraging its infrastructure and backend services to serve larger models and APIs for more advanced capabilities like background removal and replacement. More recently, with the generative AI wave taking shape, PicsArt started training its own creative AI models from scratch.

In the creative domain, it is very easy to lose a user. A small error here or there, leading to low-quality results, and there’s a good chance the person won’t come back. To prevent this, PicsArt is extremely focused on the data side of things. It is selectively using data from its own network, marked by users as public and free to edit, for training the AI models. “We have a special ‘free to edit’ license.
If you are posting publicly and tagging your image – from a stock photo in any category to a sticker or background – as free to edit, it allows another user of the service to reuse or work on top of it. So, in essence, the user is contributing this image to the community and PicsArt itself,” Mehrabyan said.

The license has been in place since the early days of the service and has given PicsArt a massive stock of user-generated content for training AI. However, as the CTO pointed out, not all of that content is high quality and ready to use right away. The data has to pass through multiple layers of cleansing and processing, both manual and AI-driven, to be transformed into a safe, training-ready dataset. “At the end of this, we have quite a big dataset that is proprietary to PicsArt. We don’t need to have additional data,” he said.

However, having a large volume of high-quality data in hand was just one part of the puzzle. The real challenge for PicsArt, as Mehrabyan described it, was to build the “data flywheel”: a self-reinforcing cycle covering not only data accessibility but also aspects like how to annotate data, how to use it and, eventually, how to leverage it as part of a continuous learning process to improve over time. Establishing a feedback loop to achieve this was a long and complex process, he said.

“We built our own annotation technology. We internally developed all related infrastructure and ecosystem technologies, including those for identifying and classifying images, tagging them and adding different types of labels to them,” Mehrabyan said. “Then, we created a team to help refine the pipeline and give feedback over time.
It’s mostly been very automatic, AI-driven with human feedback in between so that we can have continuous improvement.”

Feedback loop leads to contextual intelligence

While the human-driven feedback loop has been critical in improving PicsArt’s products, enhancing the quality of the outputs they generate, it is also taking the company toward what Mehrabyan calls “contextual intelligence”: the ability of the platform to understand user needs and deliver exactly what they want.

This function is particularly important for the platform’s growing base of business-focused users who are looking to get work done right on their smartphones, whether that’s generating graphics or a full-fledged ad for a social media campaign. The platform is still mostly used by


Airbnbs are being booked in grassroots campaign to support Ukraine locals

Airbnb hosts in Ukraine are receiving bookings for rentals. Guests have no intention of staying – they say they just want to get money into the hands of locals.

As Russia’s invasion shows no signs of stopping despite heavy economic sanctions imposed by the EU, companies ranging from Oracle to Apple suspending business in the country, bans on travel, and the seizure of assets belonging to oligarchs, Airbnb users have come up with a grassroots movement to help those on the ground. Across social media networks including Facebook, Twitter, and TikTok — as highlighted by Airbnb CEO Brian Chesky — Airbnb customers are sharing screenshots of bookings they have made but have no intention of using.

Airbnb has confirmed that it has suspended “all guest and host fees on all bookings in Ukraine at this time.” The chief executive of the room-booking platform also said on March 4 that “all operations” in Russia and Belarus will be suspended. When attempting to make a test booking in both countries, Airbnb’s page said the “listing’s calendar is blocked and they aren’t accepting bookings right now.”

Separately, Airbnb is working with hosts to provide free living accommodation for up to 100,000 refugees fleeing Ukraine, with rooms funded by Airbnb, Airbnb.org donors, and hosts themselves. Airbnb says it has assisted in providing temporary housing to over 54,000 refugees in the past five years, connected to conflicts in areas including Syria, Afghanistan, and Venezuela.

At the time of writing, the United Nations (UN) estimates that over one million residents have fled Ukraine, the majority of whom have crossed the border into Poland. Neighboring countries including Hungary, Moldova, Slovakia, and Romania have also accepted refugees.

It should be noted, however, that scammers and fraudsters will seek every opportunity they can to cash in, and they may try to use the Airbnb booking campaign and its good intentions to their advantage.
The UN Refugee Agency (UNHCR) has provided online resources for refugees, and there are more traditional ways to assist, with organizations including the Red Cross appealing for donations.


The Intertwining Digital Economy

The digital economy is frequently regarded as a beacon of innovation and growth, both influencing and influenced by ICT spending. Today, the digital world has an increasingly large impact on the economy. The digital economy itself, however, is a complex ecosystem shaped by countless macroeconomic factors: global economic trends in inflation and overall technology, “background” developments in raw material supply chains that influence technology hardware procurement, and ongoing, fast-paced advancements in emerging technology. Understanding these influences is crucial for uncovering the full picture of the digital economy’s potential and challenges.

Understanding and quantifying the economic impact of new technologies takes time. IDC’s Worldwide Digital Economy Strategies Program, in collaboration with IDC’s Data & Analytics team, developed a Digital Economic Impact model over the years and recently applied it to the key technology of the moment: artificial intelligence (AI). We chose AI because it is not only on everyone’s mind, but also a paradigm shift that is reshaping industries, economies, and societies at an unprecedented pace. As we explore the macroeconomic factors influencing the digital economy, it becomes clear that AI is both a product of these factors and a key driver of change within this dynamic landscape.

According to our model, Business AI (consumer excluded) will contribute $19.9 trillion to the global economy and account for 3.5% of GDP by 2030. You can read the report or press release for full details, but how did we calculate this? Our economic impact analysis leveraged data from our Spending Guide and other sources to capture both the immediate impacts of AI spending and the interaction with broader economic forces at play, which we explain below.
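The layered multiplier arithmetic such a model rests on can be sketched in a few lines. This is an illustrative toy, not IDC’s actual input-output methodology, and every number in it is invented for the example:

```python
# Illustrative sketch of input-output multiplier arithmetic (not IDC's
# actual model): total impact = direct + indirect + induced, where the
# indirect and induced layers are derived from the direct layer via
# multipliers. All multiplier values below are hypothetical.

def economic_impact(direct_spend, indirect_multiplier, induced_multiplier):
    """Decompose total impact into the three layers of effects."""
    indirect = direct_spend * indirect_multiplier  # supply chain + adopter gains
    induced = (direct_spend + indirect) * induced_multiplier  # household respending
    return {
        "direct": direct_spend,
        "indirect": indirect,
        "induced": induced,
        "total": direct_spend + indirect + induced,
    }

# Hypothetical example: $1.0T of direct AI vendor revenue, a 1.5x
# indirect multiplier, and a 0.3x induced multiplier applied to the
# combined direct and indirect layers.
impact = economic_impact(1.0, 1.5, 0.3)
print(impact["total"])  # 3.25 (trillion): 1.0 direct + 1.5 indirect + 0.75 induced
```

Real I/O analysis replaces the scalar multipliers with country-level input-output tables, but the decomposition into the three layers works the same way.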
Understanding How AI Impacts the Economy: Economic Impact Models

As mentioned above, AI will account for 3.5% of GDP by 2030. To estimate the overall impact of a technology product or service, IDC developed an economic impact methodology that combines IDC’s knowledge of the market and internal data with a standard analytical framework that leverages the most up-to-date country input-output (I/O) tables. In brief, the economic impact of AI can be sub-categorized into direct, indirect, and induced effects.

Direct Effects

Direct effects refer to the income generated by providers of artificial intelligence solutions or services from their direct sales to customers. In other words, it is the revenue of an AI vendor when selling its solutions or services to end users. As a concrete example, take a company that develops and sells AI-driven customer service chatbots. When this company sells its chatbot solutions to online retailers, the revenue generated from those sales represents the direct economic impact of AI.

Indirect Effects

Indirect effects involve the economic impact related to the AI supply chain and the advantages gained by entities that adopt AI, such as enhancements in productivity and revenue growth. This category also includes the influence that organizations or technology providers exert on a regional or national level through their AI-related operations. Indirect effects are further divided into “backward” and “forward” categories. Backward indirect effects refer to the economic effects on supply chains and industries that provide inputs to AI-driven sectors; in other words, revenues generated in local industries impacted by AI. Forward indirect effects refer to effects on AI adopters that benefit from the adoption of AI technology in terms of productivity, revenue growth, and other business parameters.
More concretely, backward indirect effects include all inputs supplied to AI solutions from the backend: PCs, chips, computing, colocation datacenter operators, energy providers, internet providers, and more. On the forward side, this includes any increase in revenue coming from factors such as the introduction of enhanced products or services, improvements in production and sales processes, or gains in customer acquisition that result from the implementation of AI.

Induced Effects

Induced effects stem from increased household income due to AI-related activities, leading to higher consumer spending and broader economic benefits. These are secondary effects: economic stimulus coming from increased household income, covering existing and new employees linked to the AI value chain across the direct and indirect effects layers. People will spend part of their new wages in the economy, thus generating additional economic impact. For example, take a manufacturing company with an ambitious AI strategy that has installed a dedicated AI team, hired specialists, and so on. This company may pay higher salaries to the AI team due to the increased demand for, and profitability of, AI products. As these engineers receive higher incomes, they have more disposable income to spend on goods and services within their community, perhaps buying a car or dining out more frequently. These purchases inject additional money into the local economy, benefiting sectors such as the automotive industry, restaurants, and construction businesses. This is the “ripple effect” of increased consumer spending stemming from AI-related economic activities.

Things To Watch Out For

These numbers, however, do not mean the journey from investment to monetization and economic impact is straightforward.
In the case of AI, many companies are starting to question which use cases truly add value, and we are also seeing that regulation and questions about the ethical use of AI are increasingly important topics. In our Global Future Enterprise Resiliency & Spending Survey, tech decision makers (IT and LoB) reported an average of 37 GenAI proofs of concept (PoCs) in the last 12 months, with only 5 making it into production, on average. Of those 5, they reported a 68% success rate. That means a lot of PoCs failed, a testament to the long road ahead for AI’s real impact. While AI doesn’t necessarily guarantee immediate returns, its economic impact will play out over time as the market matures. It is crucial to keep this long-term perspective in sight while making executive decisions on implementation and deployment.

Going Forward

The interplay between AI


LLMs can’t outperform a technique from the 70s, but they’re still worth using

This year, our team at the MIT Data to AI Lab decided to try using large language models (LLMs) to perform a task usually left to very different machine learning tools: detecting anomalies in time series data. This has been a common machine learning (ML) task for decades, used frequently in industry to anticipate and find problems with heavy machinery. We developed a framework for using LLMs in this context, then compared their performance to 10 other methods, from state-of-the-art deep learning tools to a simple method from the 1970s called autoregressive integrated moving average (ARIMA).

In the end, the LLMs lost to the other models in most cases — even to old-school ARIMA, which outperformed them on seven of 11 datasets. For those who dream of LLMs as a totally universal problem-solving technology, this may sound like a defeat. And for many in the AI community — who are discovering the current limits of these tools — it is likely unsurprising. But two elements of our findings really surprised us. First, the LLMs’ ability to outperform some models, including some transformer-based deep learning methods, caught us off guard. The second, and perhaps even more important, surprise was that unlike the other models, the LLMs did all of this with no fine-tuning. We used GPT-3.5 and Mistral LLMs out of the box and didn’t tune them at all.

LLMs broke multiple foundational barriers

For the non-LLM approaches, we would train a deep learning model, or the aforementioned 1970s model, using the signal for which we want to detect anomalies. Essentially, we would use the historical data for the signal to train the model so it understands what “normal” looks like. Then we would deploy the model, allowing it to process new values for the signal in real time, detect any deviations from normal and flag them as anomalies.
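This train-then-detect pattern can be sketched in a few lines. A deliberately simple mean/standard-deviation profile stands in here for the ARIMA forecaster the study used (in practice a library such as statsmodels would supply the model); the 3-sigma threshold and the toy signal are illustrative assumptions, not the study’s setup:

```python
# Sketch of the classic two-step anomaly detection pattern: first learn
# what "normal" looks like from historical data, then flag new values
# that deviate too far from it. A mean/std profile stands in for the
# ARIMA forecaster; the 3-sigma threshold and toy signal are assumptions.

def train(history):
    """Step 1: learn a 'normal' profile from the historical signal."""
    n = len(history)
    mean = sum(history) / n
    std = (sum((x - mean) ** 2 for x in history) / n) ** 0.5
    return mean, std

def detect(profile, new_values, k=3.0):
    """Step 2: flag indices whose deviation from normal exceeds k sigma."""
    mean, std = profile
    return [i for i, x in enumerate(new_values) if abs(x - mean) > k * std]

# A stable sensor reading around 10, then a live stream with one spike.
profile = train([10, 11, 9, 10, 12, 10, 9, 11, 10, 10])
print(detect(profile, [10.5, 9.8, 20.0, 10.2]))  # [2] -> the spike at index 2
```

The friction the article goes on to describe comes from exactly this structure: step 1 must be rerun per signal, and step 2 must be shipped into the operator’s environment.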
LLMs did not need any previous examples

But when we used LLMs, we did not follow this two-step process: the LLMs were not given the opportunity to learn “normal” from the signals before they had to detect anomalies in real time. We call this zero-shot learning. Viewed through this lens, it’s an incredible accomplishment. The fact that LLMs can perform zero-shot learning — jumping into this problem without any previous examples or fine-tuning — means we now have a way to detect anomalies without training specific models from scratch for every single signal or specific condition. This is a huge efficiency gain, because certain types of heavy machinery, like satellites, may have thousands of signals, while others may require training for specific conditions. With LLMs, these time-intensive steps can be skipped completely.

LLMs can be directly integrated in deployment

A second, perhaps more challenging, part of current anomaly detection methods is the two-step process employed for training and deploying an ML model. While deployment sounds straightforward enough, in practice it is very challenging. Deploying a trained model requires translating all the code so that it can run in the production environment. More importantly, we must convince the end user, in this case the operator, to allow us to deploy the model. Operators themselves don’t always have experience with machine learning, so they often consider this an additional, confusing item added to their already overloaded workflow. They may ask questions such as “how frequently will you be retraining,” “how do we feed the data into the model,” and “how do we use it for various signals and turn it off for others that are not our focus right now.” This handoff usually causes friction, and often results in a trained model never being deployed. With LLMs, because no training or updates are required, the operators are in control.
They can query with APIs, add signals they want to detect anomalies for, remove ones for which they don’t need anomaly detection, and turn the service on or off without having to depend on another team. This ability for operators to directly control anomaly detection will change difficult dynamics around deployment and may help make these tools much more pervasive.

While improving LLM performance, we must not take away their foundational advantages

Although they are spurring us to fundamentally rethink anomaly detection, LLM-based techniques have yet to perform as well as state-of-the-art deep learning models or (on seven datasets) the 1970s ARIMA model. This might be because my team at MIT did not fine-tune or modify the LLM in any way, or create a foundational LLM specifically meant to be used with time series. While all those actions may push the needle forward, we need to be careful about how this fine-tuning happens so as not to compromise the two major benefits LLMs can afford in this space. (After all, although the problems above are real, they are solvable.)

With this in mind, here is what we cannot do to improve the anomaly detection accuracy of LLMs:

Fine-tune the existing LLMs for specific signals, as this will defeat their “zero-shot” nature.

Build a foundational LLM to work with time series and add a fine-tuning layer for every new type of machinery.

These two steps would defeat the purpose of using LLMs and would take us right back to where we started: having to train a model for every signal and facing difficulties in deployment. For LLMs to compete with existing approaches — in anomaly detection or other ML tasks — they must either enable a new way of performing a task or open up an entirely new set of possibilities.
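The zero-shot setup described above amounts to serializing a window of raw signal values into a text prompt and asking the model which values look anomalous, with no training step at all. A hypothetical sketch follows; the prompt wording, number formatting and the `call_llm` stub are illustrative assumptions, not the team’s actual framework:

```python
# Hypothetical sketch of zero-shot LLM anomaly detection: a window of the
# signal is serialized into a prompt and the model is asked, with no prior
# training on this signal, which readings look anomalous. The prompt text
# and the call_llm stub are assumptions for illustration only.

def build_prompt(window):
    series = ", ".join(f"{v:.2f}" for v in window)
    return (
        "You are monitoring a sensor. Here are its recent readings:\n"
        f"{series}\n"
        "Reply with the zero-based indices of any anomalous readings, "
        "comma-separated, or 'none'."
    )

def parse_reply(reply):
    reply = reply.strip().lower()
    if reply == "none":
        return []
    return [int(tok) for tok in reply.split(",")]

def call_llm(prompt):
    """Stub standing in for a real chat-completion API call."""
    raise NotImplementedError

prompt = build_prompt([10.1, 10.3, 9.9, 42.0, 10.2])
print(prompt.splitlines()[1])  # the serialized window the model sees
```

Because the whole pipeline is just prompt construction and reply parsing, an operator can add or drop signals by editing what goes into `build_prompt`, which is the deployment advantage discussed above.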
To prove that LLMs with any added layers will still constitute an improvement, the AI community has to develop methods, procedures and practices to make sure that improvements in some areas don’t eliminate LLMs’ other advantages.

For classical ML, it took almost 2


The Business Value of Transparent Carbon Emissions Data

Carbon reporting has been much in the news over the past several years, with organizations under increasing pressure to track and disclose their carbon emissions. IDC recently published a new document that offers a holistic assessment of the rapidly evolving landscape of carbon accounting and management platform vendors serving the growing need for accurate, data-driven emissions calculations and decarbonization strategy analysis.

As the global regulatory environment for climate risk and sustainability reporting continues to develop, organizations are under increasing pressure to report on their corporate emissions as well as their strategy for decarbonization. Organizations are also experiencing mounting pressure from customers and business partners for transparent carbon emissions data, indicating a business value impact in the form of lost revenue opportunities associated with non-cooperation. This is creating new demands on software platforms to support robust emissions calculations as well as provide analytical support to optimize decarbonization pathways.

Key Trends Representative of Carbon Management Software Capabilities

The mounting regulatory and business value impact of robust carbon accounting and management practices is leading more enterprises to depend on purpose-specific applications to support these initiatives. Some important trends driving carbon accounting and management platform development include:

Data-driven emissions calculations and reporting is elevating the importance of prebuilt connectors and APIs to source relevant data from billing systems, enterprise applications, third-party sources, and IoT devices. Centralized management of sourced data is vital to ensure consistency, accuracy, efficiency, transparency, regulatory compliance, scalability, improved decision-making and enhanced collaboration.
There is a wide breadth of organizational maturity that will dictate the scale of carbon accounting an organization undertakes. While many organizations continue to focus primarily on internal carbon emissions, regulation is beginning to mandate value chain emissions reporting, which will significantly increase the complexity of emissions calculations. Regulation is also mandating limited, and eventually reasonable, assurance of carbon emissions data, requiring solutions that are highly auditable with transparent calculation methodologies and robust data verification capabilities.

As the complexity of carbon emissions reporting escalates, sustainability teams, often small in number, are becoming overburdened with routine tasks, which impinges on their availability for more strategic initiatives. An important aspect of carbon management platforms will be offloading this workload through automation and AI-driven capabilities.

Regulatory and stakeholder requirements are extending expectations from reporting of historical emissions to forward-looking decarbonization initiatives and goal management. Supporting these requirements will necessitate advanced strategy, analytics and scenario planning features. Sustainability is also increasingly extending beyond an organization’s sustainability team as initiatives are incorporated into organizational KPIs and values. Carbon management platforms therefore should incorporate communication, collaboration and project management features that foster cross-team work.

Expectations for corporate carbon emissions reporting and management are rapidly advancing. Organizations are looking to software vendors to provide not only a robust platform but also thought leadership, educational resources and support. In an era of escalating environmental scrutiny, mastering carbon accounting is not just compliance but a strategic imperative for future-proofing businesses.
The 2024 IDC MarketScape for Worldwide Carbon Accounting and Management Applications evaluates 18 software vendors across 29 scoring criteria, comprising 18 capability categories and 11 strategy categories. Software vendors included in the evaluation offer a carbon management software solution that is either a stand-alone product or a component of a broader platform. Vendors had to meet a minimum threshold of total employees as well as operate at a global scale (defined as having operations in at least two of the following regions: North America, South America, EU, APAC, Africa).

This research analyzes eighteen sustainability software vendors (Acuity, Cority, EnergyCAP, FigBytes, GE Digital, Honeywell, IBM, Microsoft, Nasdaq, Normative, Persefoni, Plan A, SAP, Salesforce, Sphera, Sweep, UL Solutions and Watershed), who are positioned in the leaders and major players categories. The analysis identified that all 18 vendors have a strong carbon management solution, but some offer a more advanced solution set as well as a more innovative roadmap than others.

The carbon management software landscape has experienced explosive growth over the past five years, which, while providing better optionality, also presents organizations with increasingly complex choices in vendor selection. This MarketScape is meant as a guide to help organizations evaluate software vendor platforms and identify the best platform for today’s as well as tomorrow’s requirements. It provides a comprehensive analysis of carbon management platforms, highlighting the increasing need for organizations to track, manage, and report carbon emissions amid evolving regulatory landscapes and stakeholder pressures.
It evaluates vendors based on their capabilities and strategies to meet future customer needs, focusing on innovation, customer satisfaction, and the ability to support organizations in their decarbonization efforts.


3. How Asian Americans see the U.S. immigration system

With more than half of Asian Americans born outside the United States, a share that rises to 67% among Asian American adults, engagement with the U.S. immigration system is a common experience. Asian American immigrants interact with the nation’s immigration system in different ways. Some came to the U.S. under different visa categories, including student visas and temporary work visas. Others obtained permanent residency through family sponsorship, employment-based preferences, and diversity and refugee categories, among others.

Asian immigrants’ engagement with the U.S. immigration system in numbers

Some 13 million Asian immigrants live in the United States, making up 32% of legal immigrants and 16% of unauthorized immigrants among the foreign-born population in the U.S. in 2022, according to a Center analysis of the American Community Survey. About one-third of those obtaining lawful permanent residency (i.e., people getting a “green card”) in 2022 were born in Asia, according to an analysis of data from the Department of Homeland Security. Among those admitted under employment-based preferences, more than 60% were born in Asia; the largest numbers were from India, China and the Philippines. Among refugees and people granted asylum in 2022, about a quarter were born in Asia.

Large numbers of people from Asia are admitted each year as lawful temporary migrants to work or study in the U.S. In 2022, about 70% of arrivals of temporary workers in specialty occupations (H-1B visas) were born in Asia; roughly two-thirds (64%) of H-1B arrivals were immigrants from India. About one-sixth of temporary managers (L-1 visas) were from Asia. Among international students arriving on F-1 visas, more than 40% were from Asia: nearly 20% of all students arriving on F-1 visas are from India, more than 10% from China and about 5% from Korea.

Today, 60% of Asian immigrants are citizens.
Another 28% are in the country legally as lawful permanent residents (20%) or temporary lawful immigrants (8%). And 13% are in the country without authorization, according to Pew Research Center estimates based on the 2022 American Community Survey.

This chapter explores how Asian Americans’ views of the U.S. immigration system are linked with their diverse backgrounds. It also examines how U.S.-born Asian Americans see the U.S. immigration system and immigration policy goals.

Do Asian immigrants think the U.S. immigration system needs to change?

Overall, 59% of Asian immigrants say the U.S. immigration system needs to be completely changed or needs major changes. Views vary by factors such as ethnicity and the main reason for immigrating.

Main reason for immigrating: About six-in-ten immigrants who came to the U.S. for educational or economic opportunities say the immigration system needs large changes, while about half of those who came to be with family say the same (53%).

Ethnicity: 70% of Indian immigrants say the U.S. immigration system needs complete or major changes, a higher share than among other ethnic groups.

Political party: Notably, views don’t vary by party among Asian immigrants. Republicans (61%) and Democrats (60%), including those who lean to each party, are equally likely to say the system needs complete or major changes.

Among U.S.-born Asian American adults, 73% say the immigration system needs to be completely changed or needs major changes, a higher share than among Asian immigrants (59%). Still, large majorities of both groups are critical of the U.S. immigration system. On the other hand, 25% of the U.S. born say the system needs minor or no changes, while 39% of immigrants say the same.

What U.S. immigration policy goals are important to Asian immigrants?

The survey, conducted between July 2022 and January 2023, asked Asian American adults about their views on specific immigration policy goals.
Among Asian immigrants:

86% say encouraging more highly skilled individuals to migrate and work in the U.S. is a very or somewhat important goal.

82% say making it easier for U.S. citizens or legal residents to sponsor a family member to immigrate to the U.S. is important.

76% say establishing stricter policies to prevent people from overstaying their visas is an important goal.

73% say allowing immigrants who came to the country illegally as children to remain in the U.S. and apply for legal status is an important policy goal.

64% say creating a way for most immigrants currently in the country illegally to stay here legally is an important goal.

62% say increasing deportations of immigrants currently in the country illegally is an important goal.

By political party

Among Asian immigrants, Democrats are more likely than Republicans to prioritize U.S. immigration policy goals that encourage immigration:

86% of these Democrats and Democratic leaners say making it easier for U.S. citizens or legal residents to sponsor a family member to immigrate to the U.S. should be an important policy goal, compared with 78% of Asian immigrant Republicans and leaners.

83% say allowing immigrants who came to the country illegally as children to remain in the U.S. and apply for legal status is important, compared with 60% of Asian immigrant Republicans.

73% say creating a way for most immigrants currently in the country illegally to stay here legally should be an important goal, compared with 51% of Asian immigrant Republicans.

Meanwhile, Republicans are more likely than Democrats to say goals that restrict illegal immigration are important for U.S. immigration policy: Most (83%) say establishing stricter policies to prevent people who enter the country legally from overstaying their visas is an important U.S. immigration policy goal. A smaller majority of Asian immigrant Democrats (73%) say this.
Republicans in this group are also much more likely than Democrats to favor increasing deportations of immigrants in the country illegally (82% vs. 50%). Notably, among Asian immigrants, only one policy goal received bipartisan support: encouraging more highly skilled individuals from around the world to immigrate and work in the U.S. (84% of Republicans and 87% of Democrats view this policy goal as important).

By ethnicity

Asian immigrants’ views also vary across ethnic groups. For example: Chinese immigrants (69%) are less likely than some other ethnic

3. How Asian Americans see the U.S. immigration system

Let’s Debunk Some Application Threat Modeling Myths!

Application threat modeling has gotten a bad rap over the years. Security leaders looking to implement application threat modeling with their product teams must contend with stakeholders who see it as nothing more than a compliance checkbox, and with previous iterations that were overly formalized and heavyweight. As security pros sort through the conflicting frameworks and approaches to find an application threat modeling approach that is effective, efficient, and repeatable, they must also unpack their own biases about what makes a good threat model. While researching my latest report, Build A Business Case For Application Threat Modeling, I spoke with security practitioners who helped clarify and debunk some of the most common misconceptions around application threat modeling. Here are three of them:

Myth: You must use a threat modeling framework.

STRIDE and DREAD are the best-known threat modeling frameworks, with PASTA, VAST, LINDDUN, and others less well known — but familiarity does not equal adoption. Most of the people we interviewed did not use any of the standard frameworks, instead preferring whiteboarding, discussion, decision trees, or a more lightweight conversation based around understanding how a specific application functions. Even the authors of the “Threat Modeling Manifesto” declined to recommend a framework, describing their guidance as “methodology-agnostic.” Frameworks do have their uses, however, such as when your threat modeling initiative is led by less experienced security personnel or by developers who are new to security; in that case, a formal framework provides guidance and structure. But don’t shoehorn in a framework. You can meet the goals of threat modeling without one.

Myth: You must conduct threat modeling differently for different types of applications.
Whether you are modeling a monolithic application, a set of APIs, an internet-of-things device, an application deployed in the cloud, or an application deployed on-premises, the security practitioners we spoke with agreed that the threat modeling structure and process are the same. That’s good news for security leaders, who can apply a single threat modeling approach across all product teams. The most important questions asked during threat modeling — What does the product do? What data does it handle? What can go wrong? What can we do about it? — are architecture-, form factor-, and deployment-agnostic. The answers to those questions will vary depending on application type, but that doesn’t change how you conduct the threat modeling exercise.

Myth: If the threat model doesn’t identify every threat, the process is flawed.

If you don’t set expectations around threat modeling’s goal, perfect can become the enemy of good. As with many security tools and processes, there can be an absolutist expectation that threat modeling will find every possible threat that will ever exist. The practitioners we spoke with stressed that threat modeling is about making the product better. Instead of labeling threat modeling a failure if it doesn’t find everything, use it as a “defense in depth” layer that helps identify and mitigate key security concerns early in the product lifecycle.

For more on this subject, please check out my latest report, Build A Business Case For Application Threat Modeling, or set up an inquiry or guidance session to discuss further. Also, if you will be attending Forrester’s Security & Risk Summit, please join me for my session, “‘The Not-So-Premature Burial’: Rethinking Application Threat Modeling,” part of the Cloud & Application Security track at the summit.
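The four architecture-agnostic questions are simple enough to capture in a lightweight record, which is the spirit of the whiteboard-style approach the practitioners described. As an illustration only — the field names and example below are my own, not taken from the report or any framework — the output of one such conversation might be jotted down like this:

```python
from dataclasses import dataclass, field

@dataclass
class ThreatModel:
    """Minimal record of a lightweight threat modeling conversation,
    organized around the four deployment-agnostic questions."""
    what_it_does: str                                   # What does the product do?
    data_handled: list[str]                             # What data does it handle?
    what_can_go_wrong: list[str] = field(default_factory=list)   # What can go wrong?
    mitigations: list[str] = field(default_factory=list)         # What can we do about it?

# Hypothetical example for a checkout service
checkout = ThreatModel(
    what_it_does="Accepts payment details and places customer orders",
    data_handled=["card numbers", "shipping addresses"],
    what_can_go_wrong=["card data written to plaintext logs"],
    mitigations=["mask card numbers before logging"],
)
print(len(checkout.what_can_go_wrong))  # → 1
```

The same structure applies whether the product is an API, an IoT device, or a monolith; only the answers change.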


Lidwave raises $10M to improve machine vision with on-chip 4D LiDAR

Lidwave has raised $10 million to improve machine vision, whether that means a car spotting pedestrians in a busy landscape or a robot in a factory seeing its surroundings more clearly. The technology is dubbed 4D LiDAR, and Lidwave is working on taking complex LiDAR sensors and putting them on a chip, said Yehuda Vidal, Lidwave’s CEO, in an interview with GamesBeat. Jumpspeed Ventures and Next Gear Ventures led the round, with strategic investment from a leading Swedish truck manufacturer. The investment underscores the significance of Lidwave’s technology and approach in advancing the future of machine vision. Lidwave will use the new funding to further develop its optical chip, launch the industry’s first software-definable 4D LiDAR sensor, and expand its market presence. “This investment marks a significant milestone for Lidwave, propelling us closer to our goal of revolutionizing machine vision,” said Vidal. “Our 4D LiDAR chip not only sets a new standard for sensor performance but also makes advanced perception technology accessible to the mass market. We are thrilled to have the support of visionary investors who share our mission to enhance safety and productivity across various industries.”

The challenge

Lidwave is putting 4D LiDAR components on a single chip.

Sensors with machine vision are critical across many industries, and there is a consensus that LiDAR (Light Detection and Ranging) sensors are essential for autonomous machines across various fields. LiDAR is a remote sensing technology that uses a laser to measure distances and create 3D models of the space near the sensor. A LiDAR system emits a laser pulse, which reflects off objects and is detected by a receiver. The time it takes for the light to return is used to calculate the distance to the object, so the system can map the space in front of, say, a LiDAR-equipped car.
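The time-of-flight ranging described above is a one-line calculation: distance is the speed of light times the round-trip time, halved because the pulse travels out and back. A minimal sketch (the function name and example timing are mine, for illustration):

```python
# Speed of light in vacuum, meters per second
SPEED_OF_LIGHT_M_S = 299_792_458.0

def tof_distance_m(round_trip_time_s: float) -> float:
    """Distance to a target from the round-trip time of a laser pulse.
    The pulse travels to the object and back, hence the division by 2."""
    return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2.0

# A pulse that returns after ~667 nanoseconds came from roughly 100 m away,
# about the long-range detection distance mentioned later in this article
print(round(tof_distance_m(667e-9), 1))  # → 100.0
```

The tiny timescales involved (a few hundred nanoseconds per hundred meters) are one reason the detectors and timing electronics in legacy systems are expensive.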
However, LiDAR’s full potential remains untapped due to high costs, complexity, and reliability issues. Legacy LiDAR systems are complicated, comprising dozens of elements including arrays of lasers, detectors, and optical components, assembled through a complex and costly process. The result is that high-end LiDAR units cost thousands (sometimes tens of thousands) of dollars, limiting widespread adoption across industries from automotive and transportation to traffic management, industrial automation, ports, and railways.

Lidwave’s answer

Lidwave is trying to take LiDAR to the mass market with small chips.

Lidwave addresses these challenges with its novel technology, marking what it calls a new era: LiDAR 2.0, an affordable system-on-chip LiDAR designed for the mass market. Lidwave’s proprietary Finite Coherent Ranging (FCR) technology integrates all critical components onto a single chip, simplifying production and drastically reducing costs. FCR achieves this by treating light as a wave, rather than relying on traditional photon counting. This approach allows for precise measurement of both range and velocity, offers high-resolution data that helps systems understand their surroundings with greater clarity, and provides immunity to external interference. By combining lasers, amplifiers, receivers, and optical routing onto one chip, Lidwave not only reduces production costs but also makes this powerful technology more accessible and reliable for a wide range of industries. Moreover, unlike conventional LiDARs, Lidwave’s coherent sensing method provides Doppler (velocity) data at the pixel level alongside depth information, enabling machines to perceive their surroundings with unmatched clarity and make better-informed decisions.

Origins

Lidwave’s founders (left to right): Yossi Kabessa, Uri Weiss and Yehuda Vidal.

Vidal cofounded Lidwave in 2021 with Yossi Kabessa (CTO) and Uri Weiss (chief scientist) in Jerusalem.
The company has fewer than 20 people. “Our core knowledge is in coherent optics. It’s a regime of optics that utilizes quantum phenomena with light for imaging purposes. We saw that LiDAR is a very complex machine that costs tens of thousands of dollars for a high-end system,” Vidal said. The variety of LiDAR sensors is wide, from small ones used in smartphones for face recognition to long-range models for cars that can detect objects more than 100 meters away. Since LiDAR is based on a laser, it has optical components that are not easily converted to silicon chips. Lidwave is a fabless chip company, meaning it designs chips and has them fabricated by contract chip manufacturers.

Sensors for cars and robots need to see better.

“We have more than 10 years’ expertise in the specific domain of coherent optics, which allows us to do this on a chip,” Vidal said. The 4D refers to time, the fourth dimension: the sensor captures spatial data over time for something like a moving car, and can thus use Doppler techniques to extract information like velocity. With this additional data, the sensor can clean up an image. The image is higher resolution, and color-coded velocity data shows whether an object is coming toward you (blue) or moving away from you (red), based on a demo Vidal showed me. Lidwave’s name reflects its focus on coherent light: it measures the wave nature of light, as opposed to counting particles, which helps it extract both velocity and depth. “This is the fourth dimension that we provide,” he said. “We still use the light, but we use it differently.” The applications range from self-driving cars to industrial automation and smart cities, as knowing the status of a moving object is useful in many different scenarios.

Investor interest

Lidwave is designing LiDAR for a single chip.
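The per-pixel velocity that coherent sensing provides comes from the Doppler effect: motion of the target shifts the frequency of the returned light, and for a round trip the radial velocity is the wavelength times the frequency shift, divided by two. A sketch of that relationship (the function name, the 1550 nm wavelength, and the example shift are my assumptions for illustration, not Lidwave specifications):

```python
def doppler_velocity_m_s(wavelength_m: float, doppler_shift_hz: float) -> float:
    """Radial velocity of a target from the Doppler shift of reflected
    coherent light. The factor of 2 accounts for the round trip; the sign
    convention (positive = approaching) is assumed for illustration."""
    return wavelength_m * doppler_shift_hz / 2.0

# A 1550 nm laser seeing a ~12.9 MHz Doppler shift corresponds to
# roughly 10 m/s (36 km/h) of radial velocity
print(round(doppler_velocity_m_s(1550e-9, 12.9e6), 2))  # → 10.0
```

This is why a coherent sensor can color-code each pixel by motion toward or away from the sensor without comparing successive frames.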
“We recognized the potential of LiDAR technology many years ago, but only now, with Lidwave, there is a clear pathway to scalability and wide adoption,” said Ben Wiener, founding partner at Jumpspeed Ventures, in a statement. “Lidwave’s revolutionary 4D chip overcomes the barriers of legacy LiDARs, reducing the complexities and costs associated with their deployment. We pride ourselves on investing in cutting-edge technologies that are positioned to fundamentally transform industries, and with this in mind, we look forward to the impact Lidwave will make.” Lidwave’s seed


1. How closely are Americans following election news, and what are they seeing?

About seven-in-ten Americans surveyed in September (69%) say they are following news about the presidential candidates for the 2024 election very (28%) or fairly (40%) closely. More people say they are tuning in to election news as Election Day gets closer. In April, 58% of U.S. adults said they were following the election at least fairly closely, and by July, that number had risen to 65%. Attention in 2020 also increased closer to that election. A survey conducted in late August and early September 2020 found that 66% of Americans said they were very or fairly closely following news about candidates Joe Biden and Donald Trump, while in late September and early October 2016, 74% of respondents were following news about Trump and Hillary Clinton. This year, the rise in attention to the election has been driven by Democrats. While Republicans and independents who lean toward the GOP were somewhat more likely than Democrats and Democratic leaners to be following the election at least fairly closely in April and July, the two parties are now about equally likely to say they are following news about the candidates very or fairly closely (70% vs. 71%, respectively). The July survey was conducted July 1-7, before Biden announced his withdrawal as the Democratic candidate on July 21. On Aug. 5, Vice President Kamala Harris was confirmed as his replacement. Older Americans are paying much closer attention to election news than are younger adults, mirroring patterns in overall attention to news. About half of U.S. adults ages 18 to 29 (53%) say they are following news about the candidates at least fairly closely, compared with 85% of those ages 65 and older. And older adults are nearly four times as likely as Americans under 30 to say they’re following election news very closely (46% vs. 12%). 
The 2024 campaign events that Americans have heard or read about most

In a 2024 presidential campaign season that has seen a number of major and dramatic events, three stand out in terms of the public’s exposure to that news. Fully 70% of U.S. adults say they have heard or read a lot about Harris replacing Biden as the Democratic presidential candidate. Close behind is the July 13 assassination attempt on former President Donald Trump during a Pennsylvania rally, with 66% saying they have heard a lot about that. (The survey questions were finalized before the second assassination attempt on Trump in September.) Finally, reinforcing reports of a large viewing audience, 64% of Americans say they heard a lot about the Sept. 10 ABC debate between Trump and Harris. Much smaller shares say they have heard or read a lot about several other topics mentioned in the survey. These include the vice presidential candidates, Republican JD Vance (36%) and Democrat Tim Walz (32%); the Democratic (29%) and Republican (24%) National Conventions; and third-party candidate Robert F. Kennedy Jr. endorsing Trump when he withdrew from the race (22%). Still, large majorities say they have heard at least a little about each of these topics. Similar shares of the two parties say they have heard or read a lot about the first attempted assassination of Trump in July. But on each of the other campaign topics measured by the survey, there are partisan differences in how much people have heard. For instance, Democrats are more likely than Republicans to say they have heard or read a lot about Harris replacing Biden as the nominee (76% vs. 67%). And the gap is larger when it comes to the debate between Harris and Trump, with 72% of Democrats saying they heard a lot about it, compared with 58% of Republicans. Democrats also are more likely than Republicans to have heard a lot about not only Walz (41% vs. 25%) but also Vance (41% vs. 34%).
Four-in-ten Democrats say they heard or read a lot about the Democratic National Convention, compared with 21% of Republicans who say the same. Republicans are more likely than Democrats to say they heard a lot about the Republican National Convention, but the gap is smaller (29% vs. 20%). Republicans are modestly more likely than Democrats to say they have heard or read a lot about Kennedy endorsing Trump when he dropped out of the race (27% vs. 19%).

What Americans want in campaign coverage – and what they actually see

The survey asked respondents what kinds of news about the presidential candidates they are most interested in seeing. Topping the list is news about the candidates’ stances on issues, with 75% of U.S. adults saying they are extremely or very interested in this. Another 60% are extremely or very interested in the candidates’ moral characters. About half are highly interested in the candidates’ career experiences and their actions and comments on the campaign trail (49% each). Some 42% express high levels of interest in who is leading the race. And trailing far behind, only 14% say they are extremely or very interested in the candidates’ personal lives. Democrats and Democratic-leaning independents are considerably more likely than Republicans and GOP leaners to be highly interested in the candidates’ moral characters (69% vs. 52%). The survey also asked which of these six types of election news Americans see most often, and the top areas of interest for Americans do not always line up with what they are actually seeing the most news about. By far, the leading topic seen by Americans is news about the candidates’ actions and comments on the campaign trail: 40% say they see the most news about this, even though it is not among the top two topics in terms of interest. Smaller shares say they see the most news about the candidates’ stances on issues (17%), the candidates’ moral characters (14%) or the political horse race (13%).
Just 8% say the most common type of election news they see involves the candidates’ personal lives, while 3% most often see news about the candidates’ career experiences.
