How agentic RAG can be a game-changer for data processing and retrieval

When large language models (LLMs) emerged, enterprises quickly brought them into their workflows. They developed LLM applications using retrieval-augmented generation (RAG), a technique that taps internal datasets to ensure models answer with relevant business context and fewer hallucinations. The approach worked like a charm, leading to the rise of functional chatbots and search products that helped users instantly find the information they needed, be it a specific clause in a policy or the status of an ongoing project.

However, even as RAG continues to thrive across multiple domains, enterprises have run into instances where it fails to deliver the expected results. Enter agentic RAG, in which a series of AI agents enhances the RAG pipeline. The approach is still new and can run into occasional issues, but it promises to be a game-changer in how LLM-powered applications process and retrieve data to handle complex user queries.

“Agentic RAG… incorporates AI agents into the RAG pipeline to orchestrate its components and perform additional actions beyond simple information retrieval and generation to overcome the limitations of the non-agentic pipeline,” vector database company Weaviate’s technology partner manager Erika Cardenas and ML engineer Leonie Monigatti wrote in a joint blog post describing the potential of agentic RAG.

The problem of ‘vanilla’ RAG

While widely used across use cases, traditional RAG is often constrained by the inherent nature of how it works. At its core, a vanilla RAG pipeline consists of two main components: a retriever and a generator. The retriever uses a vector database and an embedding model to take the user query and run a similarity search over the indexed documents, returning the documents most similar to the query.
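As an illustration, the retrieval step can be sketched in a few lines of Python. The documents, the hand-made three-dimensional embeddings, and the query vector below are toy stand-ins, not any particular vector database's API:

```python
import math

# Toy document index: hand-made embeddings standing in for a real
# vector database populated by an embedding model (values illustrative).
DOCS = {
    "Refund policy: refunds are issued within 14 days.": [0.9, 0.1, 0.0],
    "Shipping: orders ship within 2 business days.":     [0.1, 0.9, 0.0],
    "Security: all data is encrypted at rest.":          [0.0, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve(query_embedding, k=1):
    """Similarity search: rank indexed documents against the query embedding."""
    ranked = sorted(DOCS, key=lambda d: cosine(DOCS[d], query_embedding),
                    reverse=True)
    return ranked[:k]

# An embedded query about refunds lands nearest the refund document.
top = retrieve([0.85, 0.15, 0.0])
```

In a production pipeline the embeddings would come from a trained model and the search would run inside the vector database, but the ranking logic is the same.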
Meanwhile, the generator grounds the connected LLM with the retrieved data to generate responses with relevant business context. The architecture helps organizations deliver fairly accurate answers, but the problem begins when the need is to go beyond one source of knowledge (the vector database). Traditional pipelines just can’t ground LLMs with two or more sources, which restricts the capabilities of downstream products and keeps them limited to select applications.

Further, in certain complex cases, apps built with traditional RAG can suffer from reliability issues due to the lack of follow-up reasoning or validation of the retrieved data. Whatever the retriever pulls in one shot ends up forming the basis of the model’s answer.

Agentic RAG to the rescue

As enterprises continue to level up their RAG applications, these issues are becoming more prominent, forcing users to explore additional capabilities. One such capability is agentic AI, in which LLM-driven AI agents with memory and reasoning capabilities plan a series of steps and take action across different external tools to handle a task. It is particularly popular for use cases like customer service, but it can also orchestrate the different components of the RAG pipeline, starting with the retriever.

According to the Weaviate team, AI agents can access a wide range of tools, such as web search, a calculator or a software API (Slack, Gmail, a CRM), to retrieve data, going beyond fetching information from just one knowledge source. As a result, depending on the user query, the reasoning- and memory-enabled AI agent can decide whether it should fetch information, which tool is most appropriate for fetching it, and whether the retrieved context is relevant (and whether it should re-retrieve) before pushing the fetched data to the generator to produce an answer.
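That routing-and-validation loop can be sketched as follows. The stub tools, the keyword-based `choose_tool` heuristic, and the relevance check are all hypothetical placeholders; a real agent would delegate these decisions to an LLM and call actual search or API backends:

```python
# Hypothetical tool registry; a real agent would call web search,
# a calculator, or APIs such as Slack, Gmail, or a CRM here.
def search_vector_db(query):
    return ["Ticket #412: login page times out."]

def web_search(query):
    return ["Today's date is 2024-11-12."]

TOOLS = {"vector_db": search_vector_db, "web_search": web_search}

def choose_tool(query):
    """Stands in for the agent's reasoning step: route the query to a tool."""
    return "web_search" if "today" in query.lower() else "vector_db"

def is_relevant(query, docs):
    """Validation step: decide whether the retrieved context is usable."""
    return bool(docs)

def agentic_retrieve(query, max_retries=2):
    """Route, fetch, validate; re-retrieve if the context looks unrelated."""
    for _ in range(max_retries):
        tool = choose_tool(query)
        docs = TOOLS[tool](query)
        if is_relevant(query, docs):
            return tool, docs
    return tool, []  # give up after max_retries; caller handles the failure

tool, docs = agentic_retrieve("What issues were reported today?")
```

The key structural difference from vanilla RAG is the loop: retrieval is no longer a single shot, and the retrieved context is checked before it reaches the generator.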
The approach expands the knowledge base powering downstream LLM applications, enabling them to produce more accurate, grounded and validated responses to complex user queries. For instance, if a user has a vector database full of support tickets and the query is “What was the most commonly raised issue today?” the agentic system could run a web search to determine the day of the query and combine that with the vector database information to provide a complete answer.

“By adding agents with access to tool use, the retrieval agent can route queries to specialized knowledge sources. Furthermore, the reasoning capabilities of the agent enable a layer of validation of the retrieved context before it is used for further processing. As a result, agentic RAG pipelines can lead to more robust and accurate responses,” the Weaviate team noted.

Easy implementation but challenges remain

Organizations have already started upgrading from vanilla RAG pipelines to agentic RAG, thanks to the wide availability of large language models with function-calling capabilities. There has also been a rise of agent frameworks such as DSPy, LangChain, CrewAI, LlamaIndex and Letta that simplify building agentic RAG systems by letting developers plug pre-built templates together.

There are two main ways to set up these pipelines. One is a single-agent system that works through multiple knowledge sources to retrieve and validate data. The other is a multi-agent system, in which a series of specialized agents, coordinated by a master agent, work across their respective sources to retrieve data. The master agent then works through the retrieved information and passes it on to the generator.

However, regardless of the approach used, it is worth noting that agentic RAG is still new and can run into occasional issues, including latency stemming from multi-step processing and general unreliability.
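The multi-agent variant can be sketched as a master agent fanning a query out to specialized agents and merging their results for the generator. The `tickets` and `calendar` specialists and their canned data are hypothetical stand-ins for real retrieval agents:

```python
# Hypothetical specialized agents, each tied to one knowledge source.
def tickets_agent(query):
    return ["timeout", "timeout", "crash"]   # canned support-ticket issues

def calendar_agent(query):
    return ["2024-11-12"]                    # canned "what day is it" answer

SPECIALISTS = {"tickets": tickets_agent, "calendar": calendar_agent}

def master_agent(query):
    """Fan the query out to every specialist, then merge for the generator."""
    retrieved = {name: agent(query) for name, agent in SPECIALISTS.items()}
    # Merge step: count the most common ticket issue and attach the date.
    issues = retrieved["tickets"]
    top_issue = max(set(issues), key=issues.count)
    day = retrieved["calendar"][0]
    return f"Most common issue on {day}: {top_issue}"

summary = master_agent("What was the most commonly raised issue today?")
```

A single-agent setup would collapse `SPECIALISTS` into one agent that visits each source in turn; the fan-out shape is what distinguishes the multi-agent design.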
“Depending on the reasoning capabilities of the underlying LLM, an agent may fail to complete a task sufficiently (or even at all). It is important to incorporate proper failure modes to help an AI agent get unstuck when it is unable to complete a task,” the Weaviate team pointed out.

The company’s CEO, Bob van Luijt, also told VentureBeat that an agentic RAG pipeline could be expensive, as the more requests the LLM agent makes, the higher the computational costs. However, he also noted that how the whole architecture is set up could make a difference in costs in the long run. “Agentic architectures


Do we need a European DARPA to cope with technological challenges in Europe?

The US Defense Advanced Research Projects Agency (DARPA) is often held up as a model for driving technology advances. For decades, it has contributed to US military and economic dominance by bridging the gap between military and civilian applications. European policymakers frequently reference DARPA in discussions, as outlined in the 2024 Draghi Report, but an EU equivalent has yet to materialise. To create such an agency, the governance and management of European innovation programmes would need drastic changes.

DARPA supports disruptive innovation

Founded in 1958, DARPA operates under the US Department of Defense (DoD) with a straightforward mission: to fund high-risk technological programmes that could lead to radical innovation. DARPA provides support throughout the innovation process, focusing on environments where new uses for technology must be invented or adapted. Although part of the DoD, DARPA funds projects that promise technological and economic superiority whether or not they align with current military priorities. DARPA has backed projects like ARPANET, the precursor to the internet, and GPS. Today, DARPA shows interest in autonomous vehicles for urban areas and new missile technologies. As part of its core mission, DARPA accepts high financial risks on exploratory projects and makes long-term commitments to them. Many emblematic successes explain why DARPA is a reference agency. However, the list of failed projects is even longer. Both failures and successes feed the exploration process in emerging industrial sectors. They represent opportunities to learn together and build collective strategies in innovation ecosystems.
Five key principles of DARPA

DARPA’s success stems not just from its stability but from its adherence to five organisational principles that allow it to explore deep tech in an open innovation context:

Independence: DARPA operates independently of other military services, research and development centres and federal agencies, allowing it to explore options outside dominant research paradigms. While cooperation is possible, its decisions and directions are not influenced by other parts of the federal administration.

Agility: The agency’s flat organisational structure minimises bureaucracy. Its independent decision-making processes and streamlined contracting allow it to pivot quickly, test new concepts and collaborate with academic or private-sector partners. Agility also enables DARPA to test new exploration or experimentation methods that are often based on user-centric approaches. Potential military or civilian end users are involved very early in innovation projects to discuss potential uses and applications. This approach recently led DARPA to absorb the Strategic Capabilities Office (SCO), where officers from the different military services (Army, Air Force, Navy and Marines) and all military ranks test new technological solutions at different maturity levels, fostering co-creation with military innovators and expanding the agency’s impact.

Sponsorship: High-ranking executives within the DoD and other federal administrations (NASA, the Department of Energy) endorse, but do not commission, DARPA’s projects. This sponsorship model increases a project’s potential impact and allows for swift adaptation if a project fails.

Community building: DARPA creates innovation communities with a mix of diverse expertise. By bringing different perspectives together, it fosters the collective strategies essential for disruptive innovation.
Diverse leadership: Project managers come from a range of backgrounds, including civilian experts, military officers and private-sector professionals. All have demonstrated scientific and technological expertise and a solid ability to bridge dreams and foresight with reality, along with a firm command of risk and complexity management. Managers serve three- to four-year terms focused on driving technological disruption and building new innovation ecosystems. Their diverse expertise sets DARPA apart from other federal agencies.

The challenge of a European DARPA

The Draghi Report on European competitiveness suggests that a European DARPA could help bridge technological gaps, reduce dependencies and accelerate the green transition. However, implementing this model would require a seismic shift in how European agencies operate. Creating a new agency would be ineffective without ensuring that all the principles underlying DARPA’s success are implemented in Europe. Even if Europe actively promotes deep tech and devotes significant budgets to it, the public policies and ways of working prevailing in national and European agencies are hardly consistent with the DARPA model. European agencies do not have much autonomy in their decisions about exploring new ventures or managing human resources. They demonstrate an outcome-focused orientation that is inconsistent with DARPA’s approach to risk.

Two main challenges

European agencies often lack the stable missions, scope and ambition seen at DARPA. The European Space Agency (ESA), the European Defence Agency (EDA) and Eurocontrol highlight the difficulties of developing cohesive, cross-border innovation ecosystems. A European DARPA would require a unified ambition among EU member states, a challenging feat given the institutional and geopolitical divides within Europe. The debates around the European Defence Fund illustrate how complex it is to reach consensus on shared objectives and funding.
Adopting DARPA’s five organisational principles would represent a cultural revolution for European agencies in relation to EU bureaucratic norms and the budgetary controls of individual member states. Implementing these changes would also disrupt the existing power balance between countries. The DARPA model is inconsistent with the European “fair returns” model, which applies proportionality rules between member states’ funding, research operations and industrial workshare during each project’s production phase. The DARPA model would focus solely on existing competencies, excellence, risk-taking and entrepreneurial mindsets.

Establishing a European DARPA would therefore require a fundamental rethinking of public policy management in Europe. Its success would depend on whether European stakeholders are willing to adopt DARPA’s core principles, including its independence, agility and willingness to accept failure. Creating an agency is one thing; ensuring it adheres to the structures that make DARPA effective is another. The question remains: Is Europe ready for this transformation?

The European Academy of Management (EURAM) is a learned society founded in 2001. With over 2,000 members from 60 countries in Europe and beyond, EURAM aims to advance the academic discipline of management in Europe.

David W. Versailles, Professor of strategic management and innovation management, co-director of PSB’s newPIC chair, PSB Paris School of Business, and Valérie Mérindol, Lecturer and researcher in innovation and creativity management, PSB


Facts In Emails Aren't Confidential For Deposition, Judge Says

By Elliot Weld (November 8, 2024, 7:28 PM EST) — A government contractor implicated in allegations that the U.S. infringed patents for contactless data carriers must turn over portions of a former employee’s emails because the correspondence contains facts not protected by attorney-client privilege, the U.S. Court of Federal Claims has ruled…


Building The Future With AI At The Edge: Critical Architecture Decisions For Success

Edge intelligence marks a pivotal shift in AI, bringing processing and decision-making closer to where it matters most: the point of value creation. By moving AI and analytics to the edge, businesses enhance responsiveness, reduce latency, and enable applications to function independently — even when cloud connectivity is limited or nonexistent. As businesses adopt edge intelligence, they push AI and analytics capabilities to devices, sensors, and localized systems. Equipped with computing power, these endpoints can deliver intelligence in real time, which is crucial for applications such as autonomous vehicles or hospital monitoring where immediate responses are critical. Running AI locally bypasses network delays, improving reliability in environments that demand split-second decisions and scaling AI for distributed applications across sectors like manufacturing, logistics, and retail.

For IT leaders, adopting edge intelligence requires careful architectural decisions that balance latency, data distribution, autonomy needs, security needs, and costs. Here’s how the right architecture can make the difference, along with five essential trade-offs to consider:

Proximity for instant decisions and lower latency
Moving AI processing to edge devices enables rapid insights that traditional cloud-based setups can’t match. For sectors like healthcare and manufacturing, architects should prioritize proximity to offset latency. Low-latency, highly distributed architectures allow endpoints (e.g., internet-of-things sensors or local data centers) to make critical decisions autonomously. The trade-off? Increased complexity in managing decentralized networks and ensuring that each node can independently handle AI workloads.

Decision-making spectrum: from simple actions to complex insights
Edge intelligence architectures cater to a range of decision-making needs, from simple, binary actions to complex, insight-driven choices involving multiple machine-learning models.
This requires different architectural patterns: highly distributed ecosystems for high-stakes, autonomous decisions versus concentrated models for secure, controlled environments. For instance, autonomous vehicles need distributed networks for real-time decisions, while retail may only require local processing to personalize shopper interactions. These architectural choices come with trade-offs in cost and capacity, as complexity drives both.

Distribution and resilience: independent yet interconnected systems
Edge architectures must support applications in dispersed or disconnected environments. Building robust edge endpoints allows operations to continue despite connectivity issues, ideal for industries such as mining or logistics where network stability is uncertain. But distributing intelligence means ensuring synchronization across endpoints, often requiring advanced orchestration systems that escalate deployment costs and demand specialized infrastructure.

Security and privacy at the edge
With intelligence processing close to users, data security and privacy become top concerns. Zero Trust edge architectures enforce access controls, encryption, and privacy policies directly on edge devices, protecting data across endpoints. While this layer of security is essential, it demands governance structures and management, adding a necessary but sophisticated layer to edge intelligence architectures.

Balancing cost vs. performance in AI models and infrastructure
Edge architectures must weigh performance against infrastructure costs. Complex machine-learning architectures often require increased compute, storage, and processing at the endpoint, raising costs. For lighter use cases, less intensive edge systems may be sufficient, reducing costs while delivering necessary insights. Choosing the right architecture is crucial: overinvesting may lead to overspending, while underinvesting risks diminishing AI’s impact.
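As a sketch of how the latency and resilience trade-offs play out in code, the hypothetical edge node below makes decisions with a local model (no network round trip) and buffers results for later synchronization when the cloud link is down. The class, its threshold rule, and the sync behavior are illustrative assumptions, not any vendor's edge runtime:

```python
# Hypothetical edge node: decide locally for split-second response,
# and sync results to the cloud only when connectivity allows.
class EdgeNode:
    def __init__(self, cloud_online=False):
        self.cloud_online = cloud_online
        self.pending_sync = []  # buffered results awaiting upload

    def local_model(self, reading):
        """Lightweight on-device rule standing in for a compact ML model."""
        return "alert" if reading > 0.8 else "ok"

    def process(self, reading):
        decision = self.local_model(reading)  # no network round trip
        if self.cloud_online:
            self.sync()
        else:
            self.pending_sync.append((reading, decision))
        return decision

    def sync(self):
        """Push any buffered decisions once the link is back."""
        self.pending_sync.clear()

# Sensor readings are still acted on while the cloud is unreachable.
node = EdgeNode(cloud_online=False)
decisions = [node.process(r) for r in (0.3, 0.95)]
```

The design point is that the decision path never depends on connectivity; the cloud only receives telemetry after the fact, which is what buys the latency and resilience benefits at the cost of orchestration complexity.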
In summary, edge intelligence isn’t a “one size fits all” solution — it’s an adaptable approach aligned to business needs and operational conditions. By making strategic architectural choices, IT leaders can balance latency, complexity, and resilience, positioning their organizations to fully leverage the real-time, distributed power of edge intelligence.


Breaking down silos: Strategies for CIOs to foster cross-functional collaboration

Real-world examples of failures caused by organizational silos are plentiful. The tragic Boeing 737 Max crashes in 2018 and 2019 serve as a stark reminder of the dangers associated with fragmented communication and isolated teams. In the years leading up to the certification of the 737 Max, new Boeing software — the Maneuvering Characteristics Augmentation System, or MCAS — was implemented to enhance the aircraft’s handling characteristics. However, the engineering team did not effectively communicate the system’s design, its implications, and potential risks to other departments, including safety and training teams. This breakdown in cross-functional communication had catastrophic consequences.

In another example, the 2017 Equifax data breach exposed the personal information of approximately 147 million consumers. It was caused by a vulnerability that went unpatched due to a breakdown in communication between separate departments. This incident highlighted a severe lack of cross-functional security awareness, resulting in massive financial losses.

These disasters emphasize the need for seamless collaboration across functions. In regular business life, the consequences are less dramatic but still significantly impact resilience, innovation, and sustained growth.

Understanding the urgency to break down silos

The ability of teams to collaborate across the enterprise is directly linked to business success. IDC’s April 2024 Future Enterprise Resiliency and Spending Survey, Wave 4, showed that 43% of companies with a high level of success (greater than 80%) in GenAI proofs of concept and production rollouts have an effective process for coordination between IT and line-of-business (LOB) teams. Furthermore, 47% of highly successful companies maintain strong relationships with strategic GenAI partners.
As technology becomes increasingly central to business strategy, the need for IT to work seamlessly with other departments has never been more critical — yet many IT organizations still operate in isolation, focused on technical minutiae rather than broader business outcomes. Silos impede communication, stifle creativity, and hinder the agility needed to compete in a fast-paced business environment. This separation leads to redundant efforts, a lack of synergy, and a fundamental misalignment between IT capabilities and business needs. For CIOs, the task at hand is not just a nicety — it’s a strategic necessity that influences every facet of the business.

The CIO as a cross-functional leader

In the “AI everywhere” era, the CIO’s role has evolved from merely overseeing IT infrastructure to becoming a pivotal force in driving cross-functional collaboration and organizational unity. To effectively break down silos and foster a culture of collaboration, CIOs must embrace an adapted leadership approach. The figure below provides an overview of the CIO as a cross-functional leader. (Figure: IDC)

Holistic CIO leadership
Modern CIOs must embrace holistic leadership, balancing technical expertise with business acumen and ethical considerations. This involves whole-brain thinking, adaptability, and an ecosystem orientation that views the organization as an interconnected system.

Breaking down silos
CIOs play a critical role in identifying and breaking down organizational silos. This includes program management to implement strategies for breaking down silos, such as analyzing current structures, information flows, and workflows. It involves setting shared goals and creating compelling narratives that resonate across departments. It also means identifying communication gaps and redundancies and proactively removing these barriers through strategic initiatives and the use of technology to improve communication between departments.
Catalyst for synergies
The effective CIO acts as a catalyst for cross-functional synergies, creating an environment where diverse teams can collaborate seamlessly. By fostering open communication channels and promoting a culture of shared goals and mutual respect, CIOs can unlock unprecedented levels of innovation and productivity.

Embracing paradoxical leadership
CIOs must effectively balance seemingly contradictory approaches such as strategy versus execution, innovation versus risk, and centralization versus decentralization. By seeking synergistic solutions, CIOs can create win-win scenarios that maximize benefits across multiple stakeholders’ perspectives.

Systems thinking in action: Fostering empathy and integration
CIOs must demonstrate systems thinking, which involves understanding how changes in one part of the organization can affect others. They should also cultivate cross-functional empathy — appreciating the challenges and priorities of different departments — and promote integrated problem-solving that brings together diverse perspectives. These skills are crucial for a CIO who wants to promote collaboration. By appreciating the interconnectedness of various functions and demonstrating empathy toward their unique challenges and perspectives, CIOs can drive integrated and innovative solutions that benefit the entire organization.

Orchestrating governance frameworks
A robust governance framework is essential for sustaining cross-functional collaboration. CIOs must orchestrate these frameworks to provide the necessary structure and oversight, ensure alignment with organizational goals, and facilitate quick adaptation to change. This governance is the backbone of continuous improvement and effectiveness in collaborative efforts.

Enabling technology
Technology serves as an enabler, bridging gaps and connecting teams.
Implementing advanced data-sharing platforms, communication tools, and collaborative software can significantly diminish the physical and cognitive boundaries between teams. CIOs must champion these technologies while maintaining security and leveraging AI-powered solutions to identify collaboration opportunities and automate processes. By embodying these principles, CIOs can transform their organizations into agile, collaborative powerhouses ready to tackle the challenges of the digital economy. As cross-functional leaders, CIOs not only drive technological innovation but also foster collaboration that percolates through the entire organization, ultimately leading to enhanced efficiency, improved customer experiences, and sustained competitive advantage.

A strategic call to action

Breaking down silos requires a structured approach. Here’s a road map for CIOs:

Short-term initiatives: Begin with small, impactful steps: conduct audits to identify silos and initiate pilot projects to demonstrate the value of cross-functional teams. Create pilot cross-functional teams to tackle specific challenges. Provide visibility and celebrate successes to engage more stakeholders.

Medium-term tactics: Scale successful pilots and formalize governance structures to build momentum. Invest in shared understanding by establishing a common language and aligning on shared goals. Create a dedicated platform for data and resource sharing.

Long-term vision: Maintain collaboration through regular review and refinement of processes. Build a resilient and adaptable IT organization with flexible structures. Measure success through collaboration-focused KPIs and OKRs.

Conclusion

By implementing these strategies, CIOs can transform their organizations into agile, collaborative powerhouses ready to tackle the challenges of the


Don't Let Broadband Maps Overstate Rural Overlap, FCC Told

By Jared Foretek (November 12, 2024, 9:27 PM EST) — Rural telecoms are again urging the Federal Communications Commission to beware of overstated provider overlap in its National Broadband Map when allocating federal deployment funding, arguing that the map should be used as part of a holistic process to determine where money should be spent and not as the sole determinant…


Georgia Tech Chip Design Program Empowers the Apple Workforce of Tomorrow

The U.S. has struggled with a worker shortage in semiconductor chip making, and even educating people about the field’s existence has proved challenging. In response, Apple and other companies have dedicated considerable money and time to addressing the skills gap and broken talent pipeline. Apple began the New Silicon Initiative (NSI), a series of grants to tech-focused universities nationwide, to develop more skilled workers in designing and manufacturing chips. The initiative funds education and training in microelectronic circuits and hardware design. Eight universities participate, chosen for their engineering savvy and commitment to scaling up courses in creating integrated circuits.

One participant is Georgia Tech’s School of Electrical and Computer Engineering (ECE). ECE School Chair Arijit Raychowdhury spoke to TechRepublic about how Apple’s support has changed the school’s offerings and students’ potential places in the changing field of computer chip engineering and fabrication.

What is NSI at Georgia Tech?

In October, Georgia Tech celebrated the beginning of its NSI involvement, representing an expanded collaboration based on a successful chip tape-out course already offered at the university. “We’re thrilled to bring the New Silicon Initiative to Georgia Tech, expanding our relationship with its School of Electrical and Computer Engineering,” said Jared Zerbe, director of hardware technologies at Apple, in a press release. “Integrated circuits power countless products and services in every aspect of our world today, and we can’t wait to see how Georgia Tech students will help enable and invent the future.”

The full partnership will kick off in January 2025. Apple engineers will present guest lectures, review projects in several IC design courses, give feedback to students, and participate in mentorships and networking events. Apple also funds teaching assistants.
Those mentors can answer students’ questions about what jobs will be available to them once they acquire chip design skills.

(Image: Georgia Tech students listen to presentations from ECE faculty members and Apple engineers during the NSI kickoff event in October. Credit: Georgia Tech)

A highlight of the program is that the tape-out course offers students the opportunity to not only design their own chip but also have it fabricated and tested for bugs. This allows them to gain experience in revising and troubleshooting under conditions similar to those found in the real world. Graduates of the computer architecture, circuit design, and hardware technology courses at ECE can go on to become integrated circuit design engineers, chip design engineers, and analog designers.

“There was a huge interest among the students,” said Raychowdhury. “In the first semester, they designed a RISC-V microprocessor with some accelerators — and realize that these are seniors. These are not grad students. These are senior undergraduate students.” Those designs were manufactured on TSMC’s 65-nanometer process node and shipped back to the students. Then, the students could write test modules for their own chips. “Apple ended up hiring a bunch of the students from this first inaugural class,” Raychowdhury added.

Training a workforce for tomorrow’s economy

The success of the initial tape-out class led to Apple getting even more involved in coordinating with the school to meet its workforce needs. Raychowdhury said the school has had similar arrangements with companies like Texas Instruments, GlobalFoundries, and Absolics. Otherwise, “it’s very hard to find students who have that kind of expertise” in chip design, he said. When companies have a hand in the curriculum, some of what would normally be on-the-job training can be done in the classroom.
“That reduces the ramp-up time of the students when they join any of these companies,” Raychowdhury added. Meanwhile, students can see that they are gaining skills that lead directly to in-demand jobs. They have the space to “figure out whether this is something that they’re really passionate about,” said Raychowdhury. “Even in this huge area of semiconductor jobs, what exactly are they interested in? Whether it’s a design, whether it’s working in the fab, whether it’s packaging, and so on.”

Research projects explore cutting-edge uses of AI

One of the components students build in the tape-out class is a RISC-V microprocessor with an accelerator. Designed to solve linear algebra problems faster, this accelerator could be students’ first step into the hot field of designing the hardware behind generative AI. Georgia Tech and Apple’s efforts don’t focus on generative AI unless students pursue it as a more advanced research project. “There are some advanced research topics — they are not in a classroom setting yet — where students are actually pursuing ways to use AI, particularly language models, to design chips, including writing RTL,” Raychowdhury said. “That is one area which is growing in popularity.” Georgia Tech’s Professor Sung-Kyu Lim is working on using AI to accelerate back-end processes for chip design, such as layout generation and routing, to reduce time to market. Some graduate students have the opportunity to work collaboratively on that project.

Providing the resources to cross the skills gap

At Georgia Tech, up-and-coming engineers can work with technologies similar to the advanced manufacturing and processing tools they would use every day as chip designers. Georgia Tech’s AI maker space, launched in collaboration with NVIDIA, gives students access to H100 and H200 GPUs. That, in turn, gives them more processing power to tackle difficult chip design problems.
Ultimately, the plan is to produce enough skilled workers to close the skills gap. McKinsey found in 2024 that the U.S. semiconductor manufacturing workforce has shrunk 43% from its peak in 2000. The country may need 88,000 semiconductor engineers by 2029, but only about 1,000 new technicians join the workforce yearly. As Raychowdhury explained: “We need a lot more engineers who can work in the fab, who can work in design, who can work in testing.”

Georgia Tech Chip Design Program Empowers the Apple Workforce of Tomorrow

A look at the future of mainframe modernization with hybrid cloud

The value packed into mainframe data is immense. It’s decades of transactional data and history, it’s a comprehensive log of customer engagements, and it’s a view into how the operations of a business have evolved. But for as much value as it already holds, there’s even more potential waiting to be found. This is part of what has been driving the push to modernize mainframe systems for years now. As new technologies and strategies emerge, modern mainframes need to be flexible and resilient enough to support those changes.

At the same time, many organizations have been pushing to adopt cloud-based approaches to their IT infrastructure, opting to tap into the speed, flexibility, and analytical power that come along with them. Particularly as markets become more competitive, the ability to feed data into analytical models in cloud-native environments can help deliver real-time insights and ultimately carve out a competitive edge.

To get the best of both environments, many organizations are opting to take a hybrid approach—pairing the security and trusted reputation of mainframe systems with the analytical prowess and flexibility of the cloud. And as hybrid strategies become more common, they will also shape the way businesses define successful mainframe modernization for the future.

Bringing the mainframe and cloud together

Choosing to adopt a hybrid cloud approach to tackle mainframe data is more than just a modernization project. It’s a decision that maps back to the overarching goals of a business and how it wants to leverage its data. Bringing mainframe data into a cloud environment can unlock a new level of real-time analysis and insight that reshapes the way operations are managed across the enterprise. Among other benefits, a hybrid cloud approach to mainframe modernization allows organizations to:

- Leverage cloud-native technologies which, in turn, help optimize workloads for performance and scalability. This integration enhances the overall efficiency of IT operations.
- Improve skill sets across teams by better utilizing existing mainframe expertise while also building new skills related to cloud technologies, generating greater financial flexibility by optimizing the allocation of existing resources.
- Better leverage their mainframe data with near real-time access. This is made possible by incorporating cloud-based advanced analytics and AI applications, and it allows businesses to move toward more data-driven decision-making.

Laying the foundation for successful hybrid cloud

But for all those benefits, it’s important not to lose sight of the fact that a hybrid approach requires a robust, modern IT infrastructure to support it. Accomplishing that means working with a partner that not only has a deep history of mainframe excellence and solutions that accelerate modernization, but also supports bringing those systems together with cloud-native technologies. Whether it’s data intelligence tools that help democratize data and break down silos, or data replication solutions that let users leverage mainframe data in cloud analytics models without exposing it to undue risk, the right modernization partner can help transform operations from the ground up.

Learn more about how Rocket Software can help you make the most of a hybrid cloud approach to modernization.


5th Circ. Remands Texas Social Media Law Challenge

By Jared Foretek (November 8, 2024, 5:12 PM EST) — The Fifth Circuit remanded to the district court a challenge to Texas’ social media law prohibiting platforms from employing certain content moderation practices, ruling that the record in the case is still too undeveloped to resolve….


FTX Prosecutors Tout Tech Chief's 'Outstanding Cooperation'

By Rachel Scharf (November 13, 2024, 8:07 PM EST) — Manhattan federal prosecutors urged a lenient sentence for former FTX technology chief Zixiao “Gary” Wang, telling the court on Wednesday that his “outstanding cooperation” was instrumental in securing the lightning-fast indictment and ultimate conviction of founder Sam Bankman-Fried for an $11 billion fraud that sank the crypto exchange….
