ANZ CIO Challenges: AI, Cybersecurity & Data Analytics for 2025

CIOs across Australia and New Zealand’s public sector face a “hard 12 months” managing their technology estates amid cost constraints, according to a leading regional Gartner analyst. However, there is optimism that investments in AI will deliver the productivity gains many anticipate.

Gartner recently released the public sector findings from its CIO Technology Executive Survey. The firm found that 94% of ANZ government CIOs named data analytics as their top technology investment for 2025, followed by investments in cybersecurity (91%) and application modernisation (85%).

Dean Lacheca, a Gartner VP analyst, told TechRepublic that an austerity mindset was in play across government agencies. “Frustrated” CIOs were facing another year of relatively flat budgets when accounting for inflation, with little appetite for undertaking large ICT projects.

“Right now, we’re probably in one of the leanest periods with austerity,” he said. “There is a realisation that this isn’t going to be a period of massive, accelerated adoption of technology.”

Productivity rises to become a key outcome for government CIOs

Lacheca said that the prioritisation of data analytics and cybersecurity investment has remained consistent in recent years. However, progress has been slow, with investments in these areas facing challenges such as keeping up with the changing threat vectors plaguing cybersecurity professionals.

“There’s been some great work done by the Australian federal government with their cybersecurity efforts,” Lacheca noted.
“But if you look at the Essential Eight, and the movement towards the Essential Eight, it’s still … relatively slow going in that particular space.”

SEE: Private sector tech investment to be led by cybersecurity in Australia in 2025

The steady focus on data analytics and cybersecurity is now being joined by a growing desire for “human capital effectiveness,” as 94% of CIOs surveyed by Gartner prioritise productivity-driven outcomes, up significantly from last year. The productivity push comes as IT and the rest of the government are driven to increase efficiency.

“We see a real contrast,” Lacheca added. “We see some of the high-profile government projects and efforts and modernisation continue to be funded; but if you look at overarching IT investment across governments, they [CIOs] have been really hard pressed in the last few years.”

Government CIOs name AI among their top three technologies

According to the Gartner report, the top three technologies ANZ government CIOs said they have deployed or plan to deploy in the next 12 months are:

Industry cloud platforms (59%)
Generative AI (56%)
Low-code/no-code platforms (53%)

Industry cloud platforms

Lacheca said the prominence of industry cloud platforms reflects a shift towards adopting more common platforms across the public sector. Although the technology isn’t as differentiated as the more advanced industry clouds seen in financial services, this shift promotes greater standardisation across agencies.

SEE: How a sovereign cloud boom is happening in APAC

AI and generative AI

Lacheca said the “surprisingly high levels of interest in AI” seen in government are primarily driven by the hope that it can help with agencies’ productivity demands. However, after a period of hype around generative AI, CIOs have become more realistic about the implementation challenges.
Although CIOs want AI to deliver more productivity, Lacheca said their current role in that space is often one of a “risk mitigator.” “They have to be the ones that are slowing that process down, because they are trying to make sure we are approaching this with a balanced risk perspective,” he said.

SEE: Generative AI could be a source of costly mistakes for tech buyers

Low code and no code

According to Lacheca, low code has been a big growth area in recent years. He said the main reason for this is that government CIOs are trying to “head off the mistakes of the past,” which created a “whole heap of bespoke legacy technology” that they now have to deal with.

Low-code platforms can also help IT teams bridge talent gaps, he said: “It’s very hard to find IT professionals in specific technologies. So they look at low code as an opportunity for them to maybe bridge some gaps in terms of talent, where they can leverage their own internal skills.”

CIOs to continue to champion the value of IT

Aside from managing technology risk (82%), CIOs’ biggest priority is demonstrating IT’s business value in government (68%). Lacheca said ANZ government CIOs were still trying to change the “corporate services-type mindset” that comes from IT’s past. “They’re still really trying to educate or communicate the value that they contribute to the organisation itself,” he said.

Looking ahead to 2025, Lacheca said he hopes there will be a continued shift in government mindset around what legacy technology is really costing. He said there is much fear around undertaking large IT investments and projects, which can lead to governments “kicking the can down the road.”

“I think there’s going to have to be a level of ownership around how we start to mitigate that risk, and how do we do that in a way that we actually create some of this productivity gain,” Lacheca explained.
“There’s a real business case for the gains we will get if we start to relieve some of the legacy [technology] that we have.”


Georgia Tech joins Apple’s new silicon engineering initiative

Georgia Tech has joined Apple’s initiative aimed at preparing students for careers in hardware technology, computer architecture, and silicon chip design.

Georgia Tech, based in Atlanta, Georgia, said that its electrical and computer engineering students will now benefit from an expanded tapeout-to-silicon curriculum and have access to Apple engineers to better prepare for a career in hardware engineering. Let’s ignore the possibility that such jobs may be eliminated by AI in the future. For now, they are extremely skill-intensive, and it has been hard to attract enough American students to pursue these careers in recent years. This kind of program has to happen if we’re to achieve the political aim of being able to design and manufacture technology products on American shores.

The Georgia Tech School of Electrical and Computer Engineering (ECE) is expanding its collaboration with Apple by joining the company’s New Silicon Initiative (NSI). As part of the Apple NSI program, ECE students will receive various types of support to enhance their skills in microelectronic circuits and hardware design. This includes scholarship and fellowship opportunities, along with expanded coursework for both undergraduate and graduate students. Additionally, students will have the opportunity to connect with Apple engineers through mentorships, guest lectures, and networking events. Georgia Tech has about 2,500 electrical and computer engineering students.

The expanded curriculum support will benefit integrated circuit (IC) design and tapeout-to-silicon courses that prepare students for a career in hardware engineering across different focus areas, including circuit technology, electronic devices, and computing hardware and emerging architectures.
“Working with Apple as part of its New Silicon Initiative allows us to bridge the skills gap for a workforce in IC design and computer architecture by preparing students with the technical abilities and skills to enter a rapidly evolving, always in-demand industry,” said Arijit Raychowdhury, professor and chair of ECE at Georgia Tech. “Offering students the ability to learn directly from Apple engineers gives them a leg up and helps them gear up for the next chapter of their careers. We’re grateful and excited to expand our partnership with Apple to offer students unique learning opportunities.”

Photo: Apple engineer and Georgia Tech graduate Fernando Mujica addressing Georgia Tech students.

Apple engineers will work closely with ECE faculty members to present guest lectures across a range of integrated system design courses. The engineers will also participate in reviews for projects in several IC design courses and provide practical feedback to help students improve their designs throughout the tapeout process.

“We’re thrilled to bring the New Silicon Initiative to Georgia Tech, expanding our relationship with its School of Electrical and Computer Engineering,” said Jared Zerbe, director of hardware technologies at Apple, in a statement. “Integrated circuits power countless products and services in every aspect of our world today, and we can’t wait to see how Georgia Tech students will help enable and invent the future.”

As part of the NSI program, graduate students can pursue Apple Ph.D. fellowships, including a Ph.D. Fellowship in Integrated Circuits and Systems announced this October. The expanded collaboration between Apple and ECE builds upon the 2022 launch of a digital circuit design course, introduced with Apple’s support to offer undergraduate students a hands-on theory-to-tapeout course for very large-scale integration (VLSI) digital circuits. Apple launched NSI in 2019 and expanded the effort to include several HBCU Colleges of Engineering in 2021.
Georgia Tech is now the eighth university to be part of the program, giving students access to cutting-edge technologies and world-class experts.

Apple and ECE held a kick-off event at Georgia Tech last month to share the NSI news with students. During the event, Apple experts and ECE faculty members highlighted how the program will be integrated into the school’s hardware curriculum. Over 600 students attended, enjoying networking opportunities, burritos, and bubble tea.

To learn more about the Georgia Tech School of Electrical and Computer Engineering, visit . For more information about the Apple NSI at ECE, visit https://ece.gatech.edu/apple-new-silicon-initiative-nsi.

As a leading technological university, Georgia Tech is an engine of economic development for Georgia, the Southeast, and the nation, conducting more than $1 billion in research annually for government, industry, and society. More than 2,500 students are enrolled in ECE.


Media Scale And Media Skill Collide In The Full-Funnel Agencies Of The Future

Media continues to be a flywheel for growth and integration, especially when marketers and providers work together to combine expertise and capabilities into a full-funnel marketing strategy. Crucial to that success is uniting the scale of trading, technology, and execution with the skill of planning, intelligence, and optimization.

Marketers are ready to move beyond choosing between full-service media generalists wielding buying power and agnostic execution, or channel specialists offering subject matter expertise for search, social media, or performance marketing. Nearly half of US marketing executives prefer a single agency partner to handle both brand and performance media assignments. They want fewer partners to manage, reduced fees, and better marketing experiences. Providers are responding with offerings that integrate precision with persuasion, intelligence with content production, and technology with ingenuity to help marketers realize efficiency and growth.

What characteristics should marketing executives keep in mind when evaluating their agency or looking for a provider?

Look for agencies that convert buying power into buying intelligence. Paid media represents the largest portion of most marketing budgets. In the past, media billings translated to preferred rates. Today, they also provide access to performance guarantees, technology co-development opportunities, and audience insights. Today’s media management providers combine data strategy with agency audience activation platforms, transforming buying power into buying intelligence as they grow “up” funnel or add expertise.

Prioritize providers that connect media insight to creative development. Technology strategy is crucial for all media management providers. Proprietary audience platforms, powered by machine learning and first-party, third-party, and transactional data sources — combined with generative AI — become marketing operating systems that guide media planning, activation, and reporting.
The best media management providers use these operating systems to guide creative and content development, (re-)uniting media and creative.

Identify partners that offer mutually beneficial, outcomes-focused commercial models. The growing influx of technology in marketing, and agencies’ pursuit of software solutions, creates the opportunity to innovate the agency commercial model. Providers are responding with performance- and outcomes-based remuneration options that reward co-innovation and deliver mutual benefit. The best providers offer a range of options, with an eye toward outcomes-based models that align objectives.

Only hire providers that use principal media in a transparent manner. Mutual benefit, trust, and visibility must be tenets of principal media buying products and deals. Principal media is when the media management provider invests in media inventory — often at a substantial discount — and resells that inventory to clients — often with a markup that is still below the baseline cost. When executed in an open and transparent manner, principal media can offer marketers benefits such as reduced rates, exclusives, and performance guarantees while offering the provider additional margin. When executed in an opaque way, principal media practices raise questions about whether the provider is acting in the brand’s best interests, both financially and strategically. The best providers recommend principal media solutions sparingly, are transparent in their practices, and work to educate and inform marketers about how and when to use such solutions.

The Media Management Services Evaluation Showcases Changes To The Forrester Wave™

The Forrester Wave™: Media Management Services, Q4 2024, gives clients the opportunity to dive deep into the progression toward full-funnel media capabilities by evaluating 12 prominent agencies, including the global holding company agency groups and private equity-backed independent media agencies.
It also debuts new innovations in experience and format. The new Wave graphic:

Declares each provider a Leader, Strong Performer, or Contender. Forrester’s Wave graphic now shows three bands instead of four, which better highlights our calls about where vendors or providers sit in a market relative to their peers and aligns with our three-point scoring rubric. All evaluated providers in the media management services Wave are ranked as Leaders, Strong Performers, or Contenders.

Showcases in-depth customer feedback. Forrester’s Wave graphic now highlights the quality and caliber of provider reference customers, as well as ongoing feedback that Forrester collects outside of the Wave process, with more prominent markers on the graphic. In the media management Wave, PMG is a customer favorite, with Omnicom Media Group and Publicis Groupe recognized for superior customer feedback.

Complements an interactive Wave digital experience. Forrester clients can now easily move from the classic Forrester Wave report to an interactive digital experience. The media management services Wave digital experience allows clients to compare providers or an individual provider’s scorecard and tailor the Wave findings into a custom shortlist of media agencies based on specific priorities.

Forrester clients can access The Forrester Wave™: Media Management Services, Q4 2024 report. If you would like to further discuss the implications, please schedule a guidance session with me. Be on the lookout for my upcoming Forrester Wave evaluating marketing creative and content services in Q1 2025.


Navigating the complexities of security and compliance on the mainframe

As organizations look to modernize IT systems, including the mainframe, there’s a critical need to do so without sacrificing security or falling out of compliance. But that’s a balancing act that is easier said than done, especially as cybersecurity threats grow increasingly sophisticated.

Malicious actors have access to more tools and plans of attack than ever before. They’re also aggressive—in 2023 alone, there were more than 3,200 data compromises in the U.S. that affected over 350 million individuals. As those threats evolve, so too do the regulations and guidelines established in response. These shifts mean that companies have to prioritize a number of critical capabilities, like annual or quarterly penetration testing, vulnerability scanning, audit logs, systematic access controls, and much more, to remain compliant. Faced with this complex task, IT leaders need to ensure they are equipped to support new technologies while also adapting to an evolving regulatory and threat landscape if they are to keep modernization initiatives moving forward.

Balancing modernization in a complex regulatory landscape

Modernization is essential, and organizations that put it off risk getting left behind. Yet one missed configuration-based vulnerability or data loss during a migration can be catastrophic. A single cybersecurity incident can ruin a company’s reputation with corporate partners and customers, and those incidents can have far-reaching consequences that go beyond the immediate damage to IT systems, data, or operations.

For mainframe systems, what can these incidents look like? Some of the most common mainframe vulnerabilities are:

Code-based vulnerabilities. Flaws in existing code (often from third parties) that can be exploited by cybercriminals.

Configuration-based vulnerabilities. Improper settings and configurations that can leave systems open to unauthorized access.
Insider threats. Employees or contractors who intentionally or unintentionally misuse their access to mainframe systems to harm an organization.

Businesses will need to identify and implement the right strategy to combat those potential vulnerabilities while also accounting for a variety of new regulations, like the EU’s Digital Operational Resilience Act (DORA) or the Payment Card Industry Data Security Standard v4.0 (PCI DSS v4.0). New and updated localized cybersecurity regulations also require critical consideration. For example, 23 NYCRR Part 500, created by the New York State Department of Financial Services (NYDFS), requires financial services companies operating in New York to conduct annual mainframe risk assessments, perform annual mainframe penetration testing, and have robust mainframe vulnerability management programs in place to protect customer data and the systems they rely on.

Policies and regulations like these make it more important than ever for organizations to catch vulnerabilities before they become full-fledged cyber attacks. Falling out of compliance could mean risking serious financial and regulatory penalties. With the stakes so high, IT leaders need to ensure their modernization strategies are inclusive of mainframe security.

The keys to mainframe security

With the understanding that mainframe security is integral to broader modernization goals, where should organizations start? A few important elements should make up an effective security strategy.

Mainframe security requires a great deal of attention—threats and vulnerabilities are constantly evolving. To that end, one important step organizations can take to improve their mainframe security is to designate a mainframe security architect. This role can help design and maintain a secure environment tailored to the business’s specific needs while also helping to identify potential risks.
From a practical standpoint, one of the more important aspects of new regulations like DORA is the role of regular testing and scanning for vulnerabilities. Every mainframe security strategy should incorporate capabilities like code-based vulnerability scanning, regular mainframe penetration testing, regular compliance checks, point-in-time data recovery, and widespread, fully deployed multifactor authentication (MFA).

That’s where working with a trusted partner with deep expertise in mainframe modernization and security can be a game-changer. Take, for example, the security solutions offered by Rocket Software, which deliver capabilities tailored to the complex security and regulatory realities facing mainframe systems. Tools like the Rocket z/Assure® Vulnerability Analysis Program automatically scan and pinpoint vulnerabilities in mainframe operating system code, making it easier to keep pace with potential threats as they evolve. Similarly, Rocket® Mainframe Security Services offers a powerful solution for organizations looking to bolster their security with services like compliance assessments, penetration testing, and conversion services, among others.

For modernization initiatives to be successful, businesses need to ensure they are prioritizing security and compliance as part of that journey. Mainframe systems bring with them a unique set of requirements, but by implementing the right programs, processes, and tools, businesses can stay secure, minimize disruption, and adhere to local and international regulations.

Learn more about how Rocket Software can support your modernization journey without sacrificing security or compliance.


AT&T Questions FCC's Legal Authority Over 'Unlocking' Rule

By Nadia Dreid (November 14, 2024, 8:38 PM EST) — AT&T has told the Federal Communications Commission that its proposal requiring mobile providers to unlock a customer’s device within 60 days of signing up won’t stand up in court….


TunnelBear VPN Review 2024: Pricing, Ease of Use & Security

TunnelBear VPN fast facts

Our rating: 3.1 stars out of 5
Pricing: Starts at $3.33/mo (annual)
Key features:
Fun, beginner-friendly interface.
Annual independent security audits.
Unlimited device connections.

TunnelBear’s unique, bear-themed approach to its VPN service sets it apart from the more technical and standardized implementations of other VPNs. TunnelBear is available on Windows, Mac, iOS, Android, Chrome, Firefox, and Edge. It offers 5,000+ servers, unlimited device connections, and a subscription option for teams and organizations. Despite its fun design and accessibility, TunnelBear’s slower speeds and buggy app are causes for concern. In addition, its server fleet only spans 47 countries.

How much does TunnelBear VPN cost?

TunnelBear’s starting paid plan, TunnelBear Unlimited, costs $3.33 per month. The service has three subscription tiers: Free, Unlimited, and Teams. Only TunnelBear Unlimited offers a choice between monthly, one-year, and three-year subscriptions.
Plan pricing:

TunnelBear Free: free.
TunnelBear Unlimited: $3.33 per month on the one-year plan (billed $39.99 for the first year); $3.33 per month on the three-year plan ($120 billed once); $9.99 per month on the monthly plan.
TunnelBear Teams: $5.75 per user, per month (one-year plan only), with a 7-day free trial.

Feature differences:

TunnelBear Unlimited: unlimited data, unlimited device connections, city-level server selection, and priority customer support.
TunnelBear Teams: everything in Unlimited, plus centralized team billing and admin and account manager tools.

Interestingly, the monthly and three-year plans are only visible after you click the “Get started” button for the TunnelBear Unlimited subscription option priced at $3.33. Personally, I hope TunnelBear makes its additional contract options more accessible to view, or at least puts them at the top of its pricing page.

In terms of individual pricing, TunnelBear’s Unlimited subscription is at a discounted rate of $3.33 per month for the one-year plan. If you opt for the TunnelBear Unlimited paid plan, this discounted annual rate is a steal. However, because this is a sale offer, prices may vary in the future. I would suggest trying TunnelBear’s free version before purchasing a subscription. While data is capped at 2GB and some features are omitted, the free version allows you to get a feel for TunnelBear’s desktop VPN implementation.

For business owners, TunnelBear Teams is a practical option. You get all of TunnelBear’s paid features in addition to more team-based add-ons, such as centralized billing and account manager tools.

Does TunnelBear VPN have a free version?

Yes, TunnelBear has a dedicated free version of its VPN. Those interested in trying the service don’t have to worry about providing payment information, as you can easily download it with no strings attached. However, there are notable limitations with TunnelBear VPN’s free version compared to its paid plans. The free version has a 2GB data cap that resets every month.
So, realistically speaking, free users will only be able to try out the VPN for around a day or two before the data runs out. In addition, only paid users can select city-designated servers on TunnelBear’s network. Free users are only able to choose from available countries (Figure A).

Figure A: TunnelBear VPN Free. Image: TunnelBear

Despite this, TunnelBear states that its free and paid versions don’t differ performance-wise, making the free version a viable way to test whether the VPN is right for you.

SEE: TunnelBear VPN Free vs Paid: Which Plan Is Right for You? (TechRepublic)

Security and privacy: Is TunnelBear VPN secure?

Specs-wise, TunnelBear has all the requisites for a modern VPN to be considered secure. It supports OpenVPN, WireGuard, and IKEv2 — the three industry-standard security protocols we want in a VPN. It also utilizes AES-256 encryption and includes a kill switch (VigilantBear) and split tunneling (SplitBear). TunnelBear has a no-logs policy, which states that it doesn’t collect users’ browsing information, IP addresses, DNS queries, or applications used while connected to the VPN.

TunnelBear also has an impressive track record of undergoing annual independent testing. Since 2016, TunnelBear has completed seven annual independent security audits. The most recent third-party audit was done in October 2023 by Cure53, which found a total of “13 issues this year, with only 7 considered to be of medium risk or higher.” Fortunately, TunnelBear has said that “12 out of the 13 identified issues have been fixed or mitigated.” Personally, I give major props to the TunnelBear team for continuing to have its VPN tested via third-party audits. This shows a strong commitment to transparency and is a great way to build trust among its users.
On the flip side, some people may be wary of TunnelBear being based in Canada, a member of the Five Eyes intelligence alliance along with Australia, New Zealand, the U.K., and the U.S. If government intervention is something you’re trying to avoid, TunnelBear may not be for you. Overall, TunnelBear’s security features, transparency, and annual independent testing make it a secure and trustworthy VPN in 2024.

Key features of TunnelBear VPN

TunnelBear has a few key features that set it apart from other VPNs. Let’s check out four in particular that you should know about.

Beginner-friendly interface

Compared to other VPNs, TunnelBear has a very user-friendly, bear-themed interface. Servers are represented by tunnels, and when you connect or switch to a new tunnel, a bear will pop up (Figure B) from inside the tunnel — hence the name,


OCC Recovery Guidance Can Help Banks Bounce Back Better

By Max Bonici and Stephen Gannon (November 14, 2024, 5:20 PM EST) — The Office of the Comptroller of the Currency on Oct. 21 announced finalized guidelines that require larger, insured OCC-regulated institutions to maintain and test recovery plans….


How to get started with AI agents (and do it right)

Due to the fast-moving nature of AI and fear of missing out (FOMO), generative AI initiatives are often top-down driven, and enterprise leaders can tend to get overly excited about the groundbreaking technology. But when companies rush to build and deploy, they often run into all the typical issues that occur with other technology implementations. AI is complex and requires specialized expertise, meaning some organizations quickly get in over their heads.

In fact, Forrester predicts that nearly three-quarters of organizations that attempt to build AI agents in-house will fail.

“The challenge is that these architectures are convoluted, requiring multiple models, advanced RAG (retrieval augmented generation) stacks, advanced data architectures and specialized expertise,” write Forrester analysts Jayesh Chaurasia and Sudha Maheshwari.

So how can enterprises choose whether to adopt third-party models, open source tools or custom, in-house fine-tuned models? Experts weigh in.

AI architecture is far more complex than enterprises think

Organizations that attempt to build agents on their own often struggle with RAG and vector databases, Forrester senior analyst Rowan Curran told VentureBeat. It can be a challenge to get accurate outputs in expected time frames, and organizations don’t always understand the process — or importance — of re-ranking, which helps ensure that the model is working with the highest quality data.

For instance, a user might input 10,000 documents and the model may return the 100 most relevant to the task at hand, Curran pointed out. But short context windows limit what can be fed in for re-ranking. So a human user may have to make a judgment call and choose 10 documents, thus reducing model accuracy.
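To make the re-ranking step Curran describes concrete, here is a minimal sketch. It uses a deliberately toy scorer (fraction of query terms appearing in each document) in place of the cross-encoder or embedding model a production RAG stack would use; the documents and function names are illustrative, not from any real system.

```python
def rerank(query, documents, top_k=3):
    """Score each retrieved document against the query and keep the best.

    Toy scorer: fraction of query terms that appear in the document.
    A production stack would use a cross-encoder or embedding model here.
    """
    terms = set(query.lower().split())

    def score(doc):
        return len(terms & set(doc.lower().split())) / len(terms)

    # Stable sort: highest-scoring documents first, then truncate to top_k
    # so only the most relevant ones are fed into the short context window.
    return sorted(documents, key=score, reverse=True)[:top_k]

docs = [
    "Quarterly revenue grew 12% on cloud sales.",
    "The cafeteria menu changes on Mondays.",
    "Cloud revenue is the main driver of quarterly growth.",
    "Parking passes are issued by facilities.",
]
print(rerank("quarterly cloud revenue", docs, top_k=2))
```

The point of the sketch is the shape of the pipeline, not the scorer: retrieval over-fetches, a ranking function orders candidates, and only the top few survive into the model's context.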
Curran noted that RAG systems may take six to eight weeks to build and optimize. For example, the first iteration may have a 55% accuracy rate before any tweaking; the second release may reach 70%; and the final deployment will ideally get closer to 100%.

Developers need an understanding of data availability (and quality) and of how to re-rank, iterate, evaluate and ground a model (that is, match model outputs to relevant, verifiable sources). Additionally, turning the temperature up or down determines how creative a model will be, but some organizations are "really tight" with creativity, thus constraining things, said Curran.

"There's been a perception that there's an easy button around this stuff," he noted. "There just really isn't."

A lot of human effort is required to build AI systems, said Curran, emphasizing the importance of testing, validation and ongoing support. This all requires dedicated resources.

"It can be complex to get an AI agent successfully deployed," agreed Naveen Rao, VP of AI at Databricks and founder and former CEO of MosaicAI. Enterprises need access to various large language models (LLMs) and must also be able to govern and monitor not only agents and models but the underlying data and tools. "This is not a simple problem, and as time goes on there will be ever-increasing scrutiny over what and how data is being accessed by AI systems."

Factors to consider when exploring AI agents

When looking at options for deploying AI agents, whether third-party, open source or custom, enterprises should take a controlled, tactical approach, experts advise.

Start by considering several important questions and factors, recommended Andreas Welsch, founder and chief AI strategist at consulting company Intelligence Briefing. These include: Where does your team spend the majority of its time? Which tasks or steps in this process take up the most time? How complex are these tasks? Do they involve IT systems and accessible data?
What would being faster or more cost-effective allow your enterprise to do? And can you measure benchmarks, and how?

It is also important to factor in existing licenses and subscriptions, Welsch pointed out. Talk to software sales reps to understand whether your enterprise already has access to agent capabilities and, if so, what it would take to use them (such as add-ons or higher-tier subscriptions). From there, look for opportunities in one business function. For instance: "Where does your team spend time on several manual steps that cannot be described in code?" Later, when exploring agents, learn about their potential and "triage" any gaps.

Also, be sure to enable and educate teams by showing them how agents can help with their work. "And don't be afraid to mention the agents' limitations as well," said Welsch. "This will help you manage expectations."

Build a strategy, take a cross-functional approach

When developing an enterprise AI strategy, it is important to take a cross-functional approach, Curran emphasized. Successful organizations involve several departments in the process, including business leadership, software development and data science teams, user experience managers and others.

Build a roadmap based on the business's core principles and objectives, he advised: "What are our goals as an organization, and how will AI allow us to achieve those goals?"

It can be difficult, no doubt, because the technology is moving so fast, Curran acknowledged. "There's not a set of best practices, frameworks," he said. Not many developers have experience with post-release integrations and DevOps when it comes to AI agents.
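The benchmark question above, together with the accuracy iteration Curran describes earlier (55% on the first pass, 70% on the second, closer to 100% at deployment), implies an evaluation harness of some kind. A minimal sketch of one follows; the toy agent and the exact-match grader are hypothetical stand-ins, since real harnesses typically grade with human review or an LLM judge.

```python
# Minimal evaluation harness: run every (question, expected answer) pair
# through the system under test and report an accuracy score.

def exact_match(predicted: str, expected: str) -> bool:
    # Simplest possible grader: case- and whitespace-insensitive equality.
    return predicted.strip().lower() == expected.strip().lower()

def evaluate(agent, test_cases):
    # Accuracy = fraction of test cases whose answer matches exactly.
    passed = sum(exact_match(agent(q), a) for q, a in test_cases)
    return passed / len(test_cases)

def toy_agent(question: str) -> str:
    # Hypothetical stand-in for a RAG pipeline under test.
    canned = {"capital of france?": "Paris", "2 + 2?": "4"}
    return canned.get(question.lower(), "unknown")

cases = [
    ("Capital of France?", "paris"),
    ("2 + 2?", "4"),
    ("Capital of Peru?", "Lima"),  # the toy agent will miss this one
]
accuracy = evaluate(toy_agent, cases)  # 2 of 3 correct
```

Running the same fixed test set against each release is what makes numbers like 55% and 70% comparable between iterations; without a stable benchmark, "more accurate" is just an impression.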
"The skills to build these things haven't really been developed and quantified in a broad-based way." As a result, organizations struggle to get AI projects (of all kinds) off the ground, and many eventually switch to a consultancy or to one of their existing tech vendors that has the resources and capability to build on top of their tech stacks. Ultimately, organizations will be most successful when they work closely with their partners.

"Third-party providers will likely have the bandwidth to keep up with the latest technologies and architecture to build this," said Curran.

That's not to say it's impossible to build custom agents in-house; quite the contrary, he noted. For instance, if an enterprise has a robust internal development team and RAG and machine learning (ML) architecture, it may be well positioned to build its own.

How to get started with AI agents (and do it right) Read More »

CEO of Salesforce AI Clara Shih has left

The CEO of Salesforce AI, Clara Shih, has left after just 20 months in the job. Neither Shih nor the company has made an official announcement, but a source familiar with the matter confirmed to CIO that Adam Evans, previously senior vice president of product for the Salesforce AI Platform, has moved up to the newly created role of executive vice president and general manager of Salesforce AI. Evans has already updated his LinkedIn profile to reflect his new job title.

In addition to corporate CEO Marc Benioff, Salesforce has a number of divisional or regional CEOs, of whom Shih was one. Some also hold the rank of executive vice president.

CEO of Salesforce AI Clara Shih has left Read More »