InformationWeek

An AI Prompting Trick That Will Change Everything for You

This year comes packed with challenges, from rising inflation and layoffs to fake job announcements, ungodly long job interviews, and hiring delays. One way to help you land a promotion, possibly avoid a layoff, or rise to the head of the line of job candidates is to improve your AI skills. To help with that, here are several ways to use a phone picture as a prompt for AI models and apps like ChatGPT and Claude. Yes, phone pics can be used as prompts. Quick as a camera click, you’re prompting like a pro!

This information is drawn from my newest LinkedIn Learning online course, Become a GenAI Power Prompter and Content Designer, and my newest book, Generative AI For Dummies, published last October.

1. How to use a phone photo in an AI prompt

ChatGPT and Claude both allow you to attach files to your prompt. Click the paper clip icon beneath the prompt bar, then select the file or files on your device that you want to attach. In this case, that will be the photo stored on your device that you want to include in your prompt.

Most people think of attaching only text files, CSV files, and spreadsheets to a prompt. Those can be very helpful, too, in getting great and highly targeted responses from AI. But few realize that these models can extract information from photos as well. Some of ChatGPT’s and Claude’s competitors may be able to use photo data too, but for the purpose of illustrating this prompting tip, let’s stick to these two AI chatbots for now.

2. What kind of phone pic makes a good prompt for AI?

The short answer: a photo of anything that contains text about something you want to know more about, or that contains information you want the AI to build upon, is a good photo to use in a prompt.

Choose a photo from your phone’s picture gallery and ask yourself: what information does it contain that could be useful in a prompt for AI?
Here are a few photo examples. Consider what useful data each contains and what use that information might have for you. (You’ll have to move on to the tips below for the answers, but do this exercise first.)

- A phone pic of a slide that a keynote speaker is talking about in real time
- A photo of a handwritten note you made on a napkin while chatting with other conference attendees about a business idea at the hotel bar one night
- A photo of a page from a book
- A photo of a broken machine part showing information like the model number, make, and brand

3. Pop-up info from a keynote speaker’s slide in real time

Speakers, good ones anyway, limit each of their slides to three or fewer bullet points. You might want to know more in order to follow the speaker’s presentation better. Take a quick phone pic of the slide on stage, attach it to the prompt bar in the ChatGPT mobile app, and type your question or instruction in the prompt bar. An example is below. Voila! Instant pop-up information during the speaker’s talk.

Example prompt: “Extract the text from this pic and briefly explain the information in the second bullet point.”

4. From a handwritten note on a napkin to a bankable business plan

Every seasoned pro knows they often get as much out of networking at a conference as they do from the presentations, speeches, and breakout sessions. Now you can get even more value from networking over lunch, at a mixer, or over drinks at the hotel bar.

Suppose someone mentions an idea you want to explore further, but you don’t want to rudely pull out your phone to make yourself a note. Jot it down on a napkin, or whatever paper or material is handy. Yes, any handwritten note will do. Stick that note in your pocket. Later, perhaps back in your hotel room, take a photo of your note with your phone. You can then attach it to a prompt for ChatGPT or Claude in a mobile or desktop app at your convenience.
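The same attach-a-photo workflow also works programmatically, which is handy if you want to build this trick into a tool of your own. Below is a minimal sketch, assuming OpenAI's Python SDK and its documented chat-completions image-input format; the file contents, prompt text, and model choice are illustrative stand-ins, not part of the article.

```python
import base64

# Hypothetical stand-in for the napkin photo; in practice you would
# read the real file, e.g. open("napkin_note.jpg", "rb").read()
photo_bytes = b"\xff\xd8\xff\xe0fake-jpeg-data"
photo_b64 = base64.b64encode(photo_bytes).decode("ascii")

# A chat-completions message pairing the instruction with the image,
# using the documented "image_url" content-part format.
message = {
    "role": "user",
    "content": [
        {"type": "text",
         "text": "Build a business plan from the information you "
                 "extract from the attached photo."},
        {"type": "image_url",
         "image_url": {"url": f"data:image/jpeg;base64,{photo_b64}"}},
    ],
}

# Sending it would then look like this (requires the openai package
# and an API key), shown here without actually making the call:
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(model="gpt-4o",
#                                        messages=[message])
print(message["content"][1]["type"])
```

Claude's API accepts images in a similar content-parts shape, so the same idea carries over with that SDK's field names.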
Here’s an example prompt to pair with that photo attachment: “Build a business plan from the information you extract from the attached photo.”

5. Understand complex information by taking a picture of a page in a book

You’ve heard quantum computing is a looming threat to cybersecurity and a serious boost to AI capabilities. You’ve also heard, year after year, that *this* is the year quantum computing gets real. But you want to know more than the marketing hype. You want to know what quantum computing actually is and how far it has actually progressed.

Take a picture or a screenshot of a page from a scientific paper or a book and attach it to a prompt asking ChatGPT or Claude to translate the complex information into terms you can better understand. Now you know what you need to know.

6. Replace or fix a broken machine part at work using a phone pic and AI

So here you are in a data center doing routine maintenance on hardware. You discover a loose or broken part on a cooling system or a server. Now you need to report it to whoever is in charge of ordering parts or vendor repair visits. But heck, you’re not quite sure what to call that part or what information you need to request a replacement.

Or maybe you’re in your office and your desk chair sinks when you sit in it, even after you raise it up again and again. Imagine that whatever machine or furniture or tool you’re working with or on poses


GenAI Implementation: 3 Boxes Retailers Must Check

One in five retailers will deploy customer-facing generative AI applications by 2025, according to Forrester research. Alarmingly, the success rate of these projects is only 20%. After interviewing AI/ML engineers, RAND researchers found misaligned data, infrastructure, and objectives to be the main causes of failure.

Rather than taking a tech-first approach, retailers must first consider their business goals, then look at structural areas for improvement and internal GenAI expertise. There is little room for error when it comes to customer-facing tools — one data breach or poorly handled experience could send customers away for good. In 2023, over half of customers believed GenAI was a problem for customer service. Difficulty reaching an agent, receiving wrong answers, and not being treated equally were top concerns.

Retailers and their IT practitioners must think carefully about their customer strategy, looking first at non-human interactions where GenAI can seamlessly integrate into the user experience (UX), and incrementally build their talent expertise for the best results. Security and empathy will be pivotal priorities for retailers building consumer confidence in 2025.

Here are three things retailers must consider as they implement generative AI:

1. Align data understanding with AI functionality

A 2024 PMI Generative AI in Project Management Survey identified the crucial skills for GenAI usage as the ability to work with data (45%), defining task requirements (42%), prompt writing (34%), validating GenAI outputs (30%), programming and logic (28%), and understanding LLMs and NLP (22%).

Retailers and their employees should familiarize themselves with systematic services, like GenAI-powered product recommendations, before trying their hands at more complex tasks.
For example, GenAI-powered personalized product recommendations can use a simple, rule-based method such as, “Customers who bought X also bought Y.” The data input is more straightforward, too, using customer purchase history and basic demographics.

However, algorithms for targeted advertising campaigns, generating ad copy tailored to specific customer segments, and identifying optimal ad placement and timing are much more complex. These tools need up-to-date purchase history, browsing behavior, social media engagement, demographics, and location data.

AI chat outputs are a complex function — don’t be fooled into thinking you can simply build a wrapper around a widely known large language model (LLM) like OpenAI’s ChatGPT or Google’s Gemini. If you do not have a team of technical experts, consider working with leading no-code tools or hiring a long-term AI partner who can manage multi-turn conversations across leading AI models.

2. Ensure airtight security

The most urgent security risks for GenAI users are all data related. The widespread adoption of GenAI has led to a 46% increase in data policy violations, primarily due to the sharing of sensitive source code.

Using public AI tools like ChatGPT or GitHub Copilot with sensitive code can inadvertently expose information. Moreover, the more disconnected systems are, the more entry points there are for security vulnerabilities. Threat actors can use GenAI to analyze existing malware, identify patterns, and then generate new, more sophisticated threats. They could rapidly generate new strains of malware that are harder to detect, or large volumes of targeted phishing emails, widening the attack surface.

Retailers and IT leaders should aim for a solid data foundation, streamlined workflows, and a well-connected network of applications. Developers must also ensure that access controls are carefully configured to reduce these risks.
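The “customers who bought X also bought Y” rule really is simple: count how often products co-occur in orders and rank by that count. A rough sketch, with entirely hypothetical product names and order history:

```python
from collections import Counter
from itertools import combinations

def build_co_purchase_counts(orders):
    """Count how often each pair of products appears in the same order."""
    pair_counts = Counter()
    for basket in orders:
        for a, b in combinations(sorted(set(basket)), 2):
            pair_counts[(a, b)] += 1
    return pair_counts

def recommend(product, pair_counts, top_n=3):
    """Rank products most often bought alongside the given product."""
    scores = Counter()
    for (a, b), count in pair_counts.items():
        if a == product:
            scores[b] += count
        elif b == product:
            scores[a] += count
    return [item for item, _ in scores.most_common(top_n)]

# Hypothetical purchase history: each list is one customer's order.
orders = [
    ["espresso machine", "grinder", "filters"],
    ["espresso machine", "grinder"],
    ["grinder", "filters"],
    ["espresso machine", "descaler"],
]

counts = build_co_purchase_counts(orders)
print(recommend("espresso machine", counts))  # "grinder" ranks first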
By using private, secure repositories and conducting regular security audits to identify and address vulnerabilities, IT leaders can ensure a safer GenAI landscape for retailers.

As a consumer of an enterprise application, you rely on the provider to implement effective security controls. To assess their security posture, investigate their control implementations, review design documents, and request independent third-party audit reports.

3. Make room for empathy where it is needed

In a survey of 10,000 US customers across 282 brands on the six pillars of experience, empathy fell the most in 2023. Customers felt technology had become a substitute for human connection and care.

Rather than relying solely on technology going into 2025, companies should curate a blend of human and technological interactions. They can start with simple AI functions, such as product recommendations and FAQ chatbots, and direct customers to an agent for more complex tasks. By setting aside a team to closely monitor the queries that do reach agents, retailers can begin to create chatbot decision trees that answer those needs automatically.

When a customer does get through to an agent, the representative must be ready. They must stay updated on product features, benefits, and promotions to provide accurate information and help customers make informed decisions. They should also engage customers in loyalty programs, track their points, and offer exclusive rewards based on the problem or need at hand. GenAI-powered alerts can help keep agents up to date with the latest company and product changes. These tools can improve and facilitate live agents’ work, increasing productivity in a hybrid approach and ultimately enhancing the service offered.

GenAI-powered experiences are reaching customers across all industries, and retail is no different. However, customers still desire the human touch.
When retailers seamlessly integrate automated services into their UX, customers can appreciate faster access to the products suited to them. But retailers must ensure they don’t bite off more than they can chew, starting with limited problem-solving before integrating more advanced technology into their workflows.


How to Regulate AI Without Stifling Innovation

Regulation has quickly moved from a dry, backroom topic to front-page news, especially as technology continues to reshape our world. With the UK’s Technology Secretary Peter Kyle announcing plans to legislate AI risks this year, and similar moves proposed in the US and beyond, how do we safeguard against the dangers of AI while allowing for innovation?

The debate over AI regulation is intensifying globally. The EU’s ambitious AI Act, often criticized for being too restrictive, has faced backlash from startups claiming it impedes their ability to innovate. Meanwhile, the Australian government is pressing ahead with landmark social media regulation and beginning to develop AI guardrails similar to those of the EU. In contrast, the US is grappling with a patchwork approach, with some voices, like Donald Trump, promising to roll back regulations to “unleash innovation.”

This global regulatory patchwork highlights the need for balance. Regulating AI too loosely risks consequences such as biased systems, unchecked misinformation, and even safety hazards. But over-regulation can stifle creativity and discourage investment.

Striking the Right Balance

Navigating the complexities of AI regulation requires a collaborative effort between regulators and businesses. It’s a bit like walking a tightrope: Lean too far one way, and you risk stifling innovation; lean too far the other, and you could compromise safety and trust. The key is finding a balance that prioritizes a few core principles.

Risk-Based Regulation

Not all AI is created equal, and neither is the risk it carries. A healthcare diagnostic tool or an autonomous vehicle clearly requires more robust oversight than, say, a recommendation engine for an online shop. The challenge is ensuring regulation matches the context and scale of potential harm.
Stricter standards are essential for high-risk applications, but equally, we need to leave room for lower-risk innovations to thrive without unnecessary bureaucracy holding them back. We all agree that transparency is crucial to building trust and fairness in AI systems, but it shouldn’t come at the cost of progress. AI development is hugely competitive, and these systems are often difficult to monitor, with most operating as a “black box.” This raises concerns for regulators, because the ability to justify reasoning is at the core of establishing intent.

As a result, 2025 will bring increased demand for explainable AI. As these systems are increasingly applied in fields like medicine or finance, there is a greater need for them to demonstrate reasoning: explaining why a bot recommended a particular treatment plan or made a specific trade is a necessary regulatory requirement, while something that generates advertising copy likely does not require the same oversight. This will potentially create two lanes of regulation for AI, depending on risk profile. Clear delineation between use cases will support developers and improve confidence for investors and developers currently operating in a legal grey area.

Detailed documentation and explainability are vital, but there’s a fine line between helpful transparency and paralyzing red tape. We need to make sure that businesses are clear on what they must do to meet regulatory demands.

Encouraging Innovation

Regulation shouldn’t be a barrier, especially for startups and small businesses. If compliance becomes too costly or complex, we risk leaving behind the very people driving the next wave of AI advancements. Public safety must be balanced against leaving room for experimentation and innovation.

My advice? Don’t be afraid to experiment. Try out AI in small, manageable ways to see how it fits into your organization.
Start with a proof of concept to tackle a specific challenge. This approach is a fantastic way to test the waters while keeping innovation both exciting and responsible.

AI doesn’t care about borders, but regulation often does, and that’s a problem. Divergent rules between countries create confusion for global businesses and leave loopholes for bad actors to exploit. To tackle this, international cooperation is vital: we need a consistent global approach to prevent fragmentation and set clear standards everyone can follow.

Embedding Ethics into AI Development

Ethics shouldn’t be an afterthought. Instead of relying on audits after development, businesses should embed fairness, bias mitigation, and data ethics into the AI lifecycle right from the start. This proactive approach not only builds trust but also helps organizations self-regulate while meeting broader legal and ethical standards.

What’s also clear is that the conversation must involve businesses, policymakers, technologists, and the public. Regulations must be co-designed with those at the forefront of AI innovation to ensure they are realistic, practical, and forward-looking.

As the world grapples with this challenge, it’s clear that regulation isn’t a barrier to innovation; it’s the foundation of trust. Without trust, the potential of AI risks being overshadowed by its dangers.


Possibilities with AI: Lessons from the Paris AI Summit

The AI Action Summit held in Paris on Feb. 10 and 11 focused more on the possibilities than the perils of AI. French President Emmanuel Macron kicked off the event with a series of deepfaked videos of himself, seemingly more amused than concerned.

People from more than 100 countries — government leaders, tech executives, academics, and researchers among them — flocked to the event to talk about AI innovation, governance, public interest, trustworthiness, and AI’s impact on the future of work. InformationWeek spoke to three experts who attended the event to get a sense of the major themes that emerged from this third global AI summit.

Global Competition and Tension

While the AI Action Summit brought together people from around the world, a sense of competition remained strong. Macron urged Europe to take a more innovative stance in hopes of being a player in the AI race being run by China and the US. US Vice President JD Vance took to the stage to declare that the US would be the dominant player in the AI space.

Georges-Olivier Reymond, cofounder and CEO of quantum computing company Pasqal, tells InformationWeek that hardware was a key discussion point at the summit. The US, for example, has placed restrictions on AI chip exports. “Control the hardware, you have your sovereignty. And for me, that is one of the main takeaways of this event,” Reymond says.

While Vance gave voice to the “America First” approach to AI, the US is still facing stiff competition. Earlier this year, DeepSeek burst onto the scene, seemingly giving China an edge in the global race for AI dominance. The company’s founder, Liang Wenfeng, did not attend the summit, but other stakeholders from China did. Chinese Vice Premier Zhang Guoqing spoke about a willingness to work with other countries on AI, Reuters reports.
Many countries in attendance, including France and China, signed an international agreement on “inclusive and sustainable” AI. But the US and UK are two notable holdouts, splintering hopes for a unified, global approach to AI.

Innovation vs. Regulation

In 2023, the first global AI meeting was held in the UK. The second was held in Seoul, South Korea, last year. This year marks a shift away from the emphasis those two events put on safety. “Going into the AI Summit in Paris, France wanted to demonstrate the concrete benefits of AI, as opposed to solely its potential risks,” Michael Bradshaw, global applications, data, and AI practice leader at Kyndryl, an IT infrastructure services company, tells InformationWeek via email.

Vance was vocal about prioritizing innovation over safety. “The AI future is not going to be won by hand-wringing about safety,” he said, the New York Times reports. And Macron called for Europe to move faster. But while innovation may be in the front seat, regulation still has a role to play if AI is to be safe and secure and actually deliver on the value it promises.

“My takeaways center on the opportunities we have to ensure that AI is deployed to benefit society broadly,” Matthew Victor, co-founder of the Massachusetts Platform for Legislative Engagement (MAPLE), a platform that facilitates legislative testimony, tells InformationWeek via email. “While the development of social media created an array of significant harms, we have an opportunity to ensure that AI technologies are deployed to drive economic opportunity and growth, while also strengthening our civic capacities and the resilience of our democracy.”

More Change Ahead

Given the speed with which AI is moving, policymakers are hard pressed to keep up.
“Yet, I believe global policymakers, especially through constructive industry engagement and events like the AI Action Summit that present an opportunity for dialogue, are advancing with the best intentions on behalf of their public and economic interests,” says Bradshaw.

What the change ahead looks like is hard to predict, but there are areas to watch. For example, Reymond was invited to the summit to speak about quantum computing and AI. “It’s a clear signal that now AI and quantum are linked, and people recognize that,” he says.

Reymond anticipates that quantum computing could take a great leap forward in the next few years. “It could be a moment two to three years away, and it will have the same impact that ChatGPT [did],” he says. “And I think that the [governments] should be ready.”

When the next global AI summit arrives, to be hosted in India, world leaders and technology stakeholders will be facing the same big questions about AI leadership, its value, and its safety. Just how much the technology will have changed by then, and how that will reshape the answers to those questions, remains to be seen.


How Will International Politics Complicate US Access to AI?

Sometimes “The Cost of AI” rests in the hands of political players. International politics can throw disruptive curves into companies’ plans and ambitions to leverage AI to remain competitive. The extent of such disruptions, or the negotiations to avoid them, could vary based on how organizations respond.

Attempts by the United States to limit China’s access to chips produced in Asia that support AI made the arrival of DeepSeek, a seemingly lower-cost alternative to OpenAI, feel like a gamechanger. It rattled some market assumptions about pricier hardware and pointed to the potential to use alternative sources of technology to drive AI plans forward.

Could global needs for AI create “strange bedfellows” comparable to agreements seen in the pursuit of fossil fuels? Does a path forward exist for companies stymied by politics that risk narrowing access to international resources for AI technology? Ian Cohen, CEO of Lokker; Ted Krantz, CEO of Interos; Sahil Agarwal, co-founder and CEO of Enkrypt AI; and David Brauchler, technical director and head of AI and ML security for NCC Group, discussed those and other questions in this episode of DOS Won’t Hunt:

- Has DeepSeek changed the game in terms of materials and AI needs? Or does DeepSeek still need to be proven out before the rules of the game are rewritten?
- Is there any sense of communication between public and private sectors to try to mitigate potential issues with international access to materials and technology for AI?
- Does everyone need the “top tier” chips and materials to support their AI efforts?
- Are there AI needs and functions that are not beholden to access to the harder-to-obtain chips and hardware?

Listen to the full podcast here.


If Everyone Uses AI, How Can Organizations Differentiate?

In some instances, it can be rather easy to spot traces of artificial intelligence at work, especially when common “tells” surface in its use. Generative AI, at least for now, can be prone to producing illustrations in similar visual styles that repeat with each creation. What happens when companies rely on the results of AI’s work and their rivals work with the same algorithms? Does the innovation and edge AI promises disappear? Or are there ways companies can differentiate how they use AI to stand out in the market?

As InformationWeek kicks off “The Cost of AI” series, this episode of DOS Won’t Hunt brought together Andy Boyd, chief product officer with Appfire; Amol Ajgaonkar, CTO of product innovation with Insight; Mike Finley, CTO and co-founder of AnswerRocket; Kashif Zafar, CEO of Xnurta; and James Newman, head of product and portfolio marketing for Augury. The panel discussed what happens if companies start to look like they are copying each other when they use AI, what the ROI of AI is, and how organizations can differentiate what they get out of it.

Listen to the full podcast here.


The Cost of AI: Power Hunger — Why the Grid Can’t Support AI

Remember when plans to use geothermal energy from volcanoes to power bitcoin mining turned heads as an example of skyrocketing, tech-driven power consumption? If it possessed feelings, AI would probably say that was cute as it gazes hungrily at the power grid.

InformationWeek’s “The Cost of AI” series previously explored how energy bills might rise with demand from artificial intelligence, but what happens if the grid cannot meet escalating needs? Would regions be forced to ration power with rolling blackouts? Will companies have to “wait their turn” for access to AI and the power needed to drive it? Will new sources of power come online fast enough to absorb demand? The answers might not be as simple as adding windmills, solar panels, and more nuclear reactors to the grid. Experts from KX, GlobalFoundries, and Infosys shared their perspectives on AI’s energy demands and the power grid’s struggle to accommodate this escalation.

“I think the most interesting benchmark to talk about is the Stargate [project] that was just announced,” says Thomas Barber, vice president, communications infrastructure and data center at GlobalFoundries. The multiyear Stargate effort, announced in late January, is a $500 billion plan to build AI infrastructure for OpenAI with data centers in the United States. “You’re talking about building upwards of 50 to 100 gigawatts of new IT capacity every year for the next seven to eight years, and that’s really just one company.”

That is in addition to Microsoft and Google developing their own data center buildouts, he says. “The scale of that, if you think about it, is the Hoover Dam generates two gigawatts per year. You need 50 new Hoover Dams per year to do it.” The Stargate site planned for Abilene, Texas, would include power from green energy sources, Barber says.
“It’s wind and solar power in West Texas that’s being used to supply power for that.” Business Insider reported that developers also “filed permits to operate natural gas turbines at Stargate’s site in Abilene.”

Barber says that as power gets allocated to data centers, some efforts to go green are being applied in a broad sense. “It depends on whether or not you consider nuclear green,” he says. “Nuclear is one option, which is not carbon-centric. There’s a lot of work going into colocated data centers in areas where solar is available, where wind is available.”

Barber says very few exponentials, such as Moore’s Law on microchips, last, but AI is now on the “upslope of the performance curve of these models.” Even as AI gets tested against more difficult problems, these are still the early training days in the technology’s development. When AI moves from training into inference — where AI draws conclusions — Barber says demand could be significantly greater, maybe even 10 times so, than with training. “Right now, the slope is driven by training,” he says. “As these models roll out, as people start adopting them, the demand for inference is going to pick up and the capacity is going to go into serving inference.”

A Nuclear Scale Matter

The world already sees very hungry AI models, says Neil Kanungo, vice president of product-led growth for KX, and that demand is expected to rise. According to research released in May by the Electric Power Research Institute (EPRI), data centers currently account for about 4% of electricity use in the United States, a figure EPRI projects could rise as high as 9.1% by 2030. While AI training drives high power consumption, Kanungo says the ubiquity of AI inference makes its draw on power significant as well.
One way to improve efficiency, he says, would be to take the transmission side of power out of the equation by placing data centers closer to power plants. “You get huge efficiency gains by cutting inefficiency out, where you’re having over 30% losses traditionally in power generation,” Kanungo says.

He is also a proponent of nuclear power, considering its energy load and land-usage impact. “The ability to put these data centers near nuclear power plants [means] what you’re transmitting out is not power,” he says. “You’re transmitting data out. You’re not having losses on data transmission.”

Nuclear power development in the United States, he says, has stalled somewhat due to negative perceptions of safety and potential environmental concerns. Rising energy demands might be a catalyst to revisit those conversations. “This might be the right time to switch those perceptions,” Kanungo says, “because you have tech giants that are willing to take the risks and handle the waste, and go through the red tape, and make this a profitable endeavor.”

He believes these are still the very early stages of AI adoption, and as more agents are used with LLMs — completing tasks such as shopping for users, filling out tabular data, or doing deep research — more computation will be needed. “We’re just at the tip of the iceberg of agents,” Kanungo says. “The use cases for these transformer-based LLMs are so great, I think the demand for them is going to continue to go up and therefore we should be investing power to ensure that you’re not jeopardizing residential power … you’re not having blackouts, you’re not stealing base load.”

Energy Hungry GPUs

There is an unprecedented load being put on the grid, according to Ashiss Kumar Dash, executive vice president and global head of services, utilities, resources, energy and sustainability for Infosys.
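The co-location argument is easy to quantify. As a rough, hypothetical illustration (the 100 MW plant size is invented; the flat 30% loss rate is the figure Kanungo cites):

```python
# Hypothetical plant output, with the ~30% generation-to-consumer
# loss rate cited above applied to traditional grid delivery.
generated_mw = 100.0
loss_rate = 0.30

# Power that actually reaches a distant data center over the grid.
delivered_remote_mw = generated_mw * (1 - loss_rate)  # 70.0 MW

# A co-located data center sidesteps most transmission loss, so the
# same plant effectively supports ~43% more usable compute load.
gain = generated_mw / delivered_remote_mw - 1
print(f"{delivered_remote_mw:.1f} MW delivered remotely; "
      f"~{gain:.0%} more usable if co-located")
```

The data leaving the site, unlike the power, loses essentially nothing in transit, which is the core of the argument.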
He says the power conundrum as it relates to AI is three-pronged. “The increase in demand for electricity, increase in demand for energy is unprecedented,” Dash says. “No other general-purpose technology has put this much demand in the past … they say a ChatGPT query consumes 10 times the energy that a Google search would.” (According to research


It Takes a Village: New Infrastructure Costs for AI — Utility Bills

Demand for artificial intelligence, from generative AI to the development of artificial general intelligence, puts greater burdens on power plants and water resources, which might also put the pinch on surrounding communities. The need to feed power to the digital beast to support tech trends, such as the rise of cryptocurrency, is not new, but the persistent demand to build and grow AI calls new attention to the limits of such resources and the inevitable rise in prices.

“The growth in power utilized by data centers is unprecedented,” says David Driggers, CTO for cloud services provider Cirrascale. “With the AI boom that’s occurred in the last 18 to 24 months, it is literally unprecedented on the amount of power that’s going to data centers and the projected amount of power going into data centers. Dot-com didn’t do this. Linux clustering did not [do] this.”

The hunger for AI has led to a new race for energy and water, which can be very precious in some regions. The goal might be to find a wary balance, but for now stakeholders are just looking for ways to keep up. “Data centers used to take up 1% of the world’s power, and that’s now tripled, and it’s still going up,” Driggers says. “That’s just insane growth.”

In recent years, chipmakers such as Nvidia and AMD saw their sales to data centers ramp up in response to demand and expectations for AI, he says, as more users and companies dove into the technology. “A big part of it is just the power density of these platforms is significantly higher than anything that’s been seen before,” Driggers says.

Feeding the Machines

There was a time when an entire data center might need one megawatt of power, he says. Then that became the power scale to support just a suite; now it can take five megawatts to do the job. “We’re not a hyperscaler, but even within our requirements, we’re seeing over six months, our minimum capacity requirements are doubling,” Driggers says.
“That’s hard to keep up with.”

The runaway demand might not be simple to respond to, given the complexities of regulations, supply, and the costs this all brings. Evan Caron, co-founder and chief investment officer of Montauk Climate, says a very complicated interdependency exists between public and private infrastructure. “Who bears the cost of infrastructure buildout? What markets are you in? There’s a lot of nuance associated with where, what, when, how, et cetera.”

There is no catchall answer to this demand, he says, given local and regional differences in resources and regulations. “It’s very hard to assume the same story works for every part, every region in the US, every region globally,” Caron says, “who ultimately bears the cost, whether it’s inflationary, whether it’s ultimately deflationary.”

Even before the heightened demand for AI, data centers already came with significant utility price tags. “Generally speaking, a data center uses a lot of land, a lot of water — fresh water — a lot of power,” Caron says. “And you need to be able to build infrastructure to support the needs of that customer.”

Depending on where in the US the data center is located, he says, there can be requirements for data centers to build substations, transmission infrastructure, pipeline infrastructure, and roads, which all add to the final bill. “Some of it will be borne by the consumers in the market,” Caron says. “The residential customers, the commercial customers that aren’t the data center are going to get charged a share of the cost to interconnect that data center.”

Still, it is not as simple as hiking up prices any time demand increases. Utility companies typically must present to their respective utility commissions their plans to provide those services, their need to build transmission lines, and more, to determine whether it is worth making such upgrades, Caron says.
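The scale Driggers describes, minimum capacity requirements doubling every six months, compounds faster than intuition suggests. A minimal sketch (the 5 MW starting figure is hypothetical, chosen only to match the suite-scale number mentioned earlier) shows why utilities struggle to plan for it:

```python
# Project data-center capacity that doubles every six months,
# per Driggers's observation. The starting value is hypothetical.
def project_capacity(start_mw: float, years: int) -> list[float]:
    """Capacity at each six-month step, doubling per step."""
    steps = years * 2  # two doublings per year
    return [start_mw * 2**i for i in range(steps + 1)]

capacity = project_capacity(start_mw=5.0, years=3)
print(capacity[-1])  # a 5 MW requirement grows to 320 MW in three years
```

Six doublings in three years turns a single-suite load into what would once have been dozens of entire data centers, which is the planning problem the utility commissions are now weighing.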
“That’s why you’re seeing a lot of pushback,” he says, “because the assets that are going behind the meter get unfair subsidies from a utility, from a transmission company, from a generation company.” This can increase costs passed on to other consumers.

It does not have to be that way, though. If hyperscalers were required to front the entire bill for such new infrastructure, Caron says, it could be argued that it would benefit the rest of the customers and the community. However, that is not the current state of affairs. “They’re not interested in bearing the cost across the board,” he says, “so they’re pushing a lot of those costs back to consumers.”

The first several years of such buildouts could be very inflationary, Caron says. The promise of AI, to deliver smarter systems that are more efficient with lower costs of living, would ultimately be deflationary. In the near term, however, there is a supply and demand imbalance, he says. “You have more demand than supply; prices have to rise to meet that.” That could lead to increased costs across technology-driven regions with elevated competition for resources. “It’s going to be very inflationary for a long time,” Caron says.

He foresees the Trump administration moving to rip out regulation based on a narrative that these processes can be made easier, but state governments and the federal government have distinct powers that can make this more complex than solving the problem with the stroke of one pen. “Utilities are regulated monopolies in the state,” Caron says. “There’s almost 3,000 separate utilities in North America.”

Multiple stakeholders, incumbent energy companies, independent power producers, and the fairness doctrine around antitrust are all elements that come into play in this energy race. “You’re not going to get everyone to be aligned around the same set of expectations,” Caron says.
Consumers want prices to go down, he says, while energy generators may want prices to go up, transmission companies


Securing a Better Salary: Tips for IT Pros

Negotiating a higher salary or better benefits can be daunting, but IT professionals can strengthen their case by aligning their contributions with organizational goals and adopting strategic approaches. The key to securing a raise lies in preparation, communication, and demonstrating measurable value to higher-ups.

Quantifiable metrics are crucial during salary discussions, as they provide clear evidence of your impact. Key performance indicators (KPIs) to highlight include revenue generation, cost savings, productivity improvements, customer satisfaction, and security or risk mitigation. Demonstrating how your contributions align with these metrics makes a compelling case for your value to the organization.

Scott Wheeler, cloud practice lead at Asperitas, says it’s important to start raise negotiation preparations by understanding the organization’s strategic and tactical goals. Taking on projects that are both impactful and achievable shows alignment with the company’s priorities. “Identify work that aligns with those goals and has reasonable delivery timelines, preferably under a year,” Wheeler says.

He adds that building a productive rapport with managers is another cornerstone of effective salary negotiations. “Understand what your manager values and what they will be evaluated on,” Wheeler says. “Align your work with their goals and share progress on your projects regularly.”

He says establishing a personal connection with higher-ups can also help. “Knowing what your manager values, both in and outside of work, creates a better partnership and makes communication easier,” Wheeler explains.

Megan Smith, head of HR at SAP North America, agrees: the more an employee can master the art of communicating proactively with their manager, the greater the trust they can build.
“This includes things like sharing the right level of information at the right time,” she explains via email. For example, providing a heads-up about possible risks in a project and regularly sharing summary updates of what is being accomplished helps the manager trust they have the degree of visibility necessary for the overall success of the team.

Salary as Reflection of Performance

Smith says having a conversation with your manager about your salary is really a conversation about how you are achieving your goals, because a salary increase reflects your performance. “Discuss your performance with your manager early and often, so that when you want to connect it to salary, which can be done at any time but recommend at least a couple months prior to the salary review timeline of your company, this is a natural connection,” she says.

She recommends approaching salary conversations with curiosity, for example by asking your manager how they perceive your salary aligning to your contributions and impact. “Get educated on your own point of view,” she adds. “Do you have any data from internal salary ranges to suggest if you are positioned low?”

Smith says it’s important not to frame the conversation as “asking for a raise” but rather as an informed discussion about how your salary reflects your contributions, and whether that presents an opportunity for an increase in the next salary review cycle.

IT as a Leadership Profession

From the perspective of Mark Ralls, president at Auvik, the nature of IT work provides ample opportunities for IT pros to show leadership even if they are not in a formal managerial role. “Cross-functional or team-based project work allows IT pros to demonstrate the ability to manage through influence, where they help coordinate the efforts of others through relationship building and persuasion rather than formal authority,” he says.
Wheeler also emphasizes the importance of teamwork and collaboration in achieving goals. “Form partnerships, either internally or externally, that can help you deliver results,” Wheeler says. “Most work requires a team effort, and sometimes moving to a different internal team may be necessary to produce the desired outcome.”

Documenting and showcasing these successes is critical to building a strong case during salary discussions. Success in salary negotiations also depends on effective communication and the ability to understand and address the motivations of various stakeholders to align everyone with a common objective. “Gaining buy-in and achieving desired outcomes by establishing credibility and trust is a key indicator that someone is ready for that next step to management, earning a raise and potentially a promotion in the process,” Ralls says.

A recent engineering career mobility report by SignalFire indicates specialization is a key way to turbocharge upward mobility, and with it, salary bumps. Jarod Reyes, head of developer community at SignalFire, says instead of focusing on a general KPI around developer productivity, he would focus on finding a project, or a place in the engineering organization, where one can become the specialist.

“We can see in the data that specialization is the key to rapid upward mobility for engineers happy in their current role,” he says. “We could see engineers who wanted to move into management roles would take paths that developed more broad skill sets, expanding their surface area and sphere of influence.” This includes finding ways to lead a project and looking for opportunities to improve the business or reduce costs, what Reyes calls “sure-fire bets.”
He notes that engineers who wanted to move up a non-management path (a specialist path, such as principal or staff engineer) focused on narrowing their skill sets, taking roles where they were expected to be the directly responsible individual, such as a site-reliability engineer or data architect.

Drawing on 13 years of personal experience managing and building engineering teams, Reyes says communicating often with the team about the values that are rewarded is very important. “Having direct conversations not just annually, but monthly with your engineers is an important way of building trust and earning loyalty,” he says. “I think more important than upward mobility I have found that engineers really enjoy working on a team that is crucial, efficient and impact oriented.”


AI’s Hidden Cost: Will Data Preparation Break Your Budget?

During many major tech conferences and events in 2024, talk of implementing artificial intelligence was a common theme as IT leaders were tasked with creating new GenAI tools for business. But a common refrain was the need to prepare data for machine learning. That need for clean data may slow AI launch efforts and add to costs.

A recent Salesforce report found CIOs are spending a median of 20% of their budgets on data infrastructure and management and only 5% on AI. A lack of trusted data ranked high on the list of CIOs’ main AI fears. In another report, research firm International Data Corporation (IDC) says worldwide spending on AI will reach $632 billion in 2028.

The industry was caught off guard when OpenAI’s ChatGPT launched the GenAI arms race two years ago, and many companies are now juggling broader data needs with getting that data AI-ready. Spending on data preparation could be a significant upfront cost for AI, varying with the size and maturity of different businesses and organizations.

Preparing data for AI is a tricky and potentially costly task. IT leaders must consider several factors, including the quality, volume, and complexity of data, along with the costs associated with data collection, cleaning, labeling, and conversion into a form suitable for an AI model. Added on top of the new hardware, software, and labor costs associated with GenAI adoption, the bills add up quickly.

CIOs and other tech leaders are faced with presenting AI as a potential value creator and possible revenue generator. But many companies face an uphill battle when it comes to ROI on new GenAI programs, because the time and cost to prepare data often don’t lead to immediate returns.
Spending Money on Data to Make Money with AI

Barb Wixom, author and principal research scientist at MIT’s Center for Information Systems Research (MIT CISR), says leaders can point to specific successes at other companies that have more mature AI rollouts. Those companies, she says, have built strong data value through forward-looking governance.

“AI has to be viewed, not as AI, but as a part of the data value creation or data realization,” she tells InformationWeek in a phone interview. “I call it data monetization … converting data to money. If organizations and especially leaders just consistently think about AI in that context, you won’t have a problem … if an organization is trying to reduce its cost structure by a certain percentage, or trying to increase sales in some way, or increase service growth — whatever the objective is — that’s often big money. Even if you have an extraordinary investment in AI, the outcome could be orders of magnitude greater.”

With tech budgets tightening in the face of macroeconomic woes, IT leaders need to convince non-technical members of the C-suite that data preparation is a worthwhile investment. Wixom points to success stories in the financial services industry where IT leaders had strong credibility within their executive team. One such leader, she says, used an internal consulting group to accumulate use cases to present a more traditional business plan to executives. “They road-mapped how they were going to build out over four years — they were able to deliver that,” Wixom says.

But other organizations may not be as mature in their data governance as a major financial institution. In those cases, an incremental, bottom-up approach can be effective as well.
“You don’t have to start with the vision of all that’s going to be done … but by taking an incremental approach that builds capability, where you learn along the way and establish not silos, but a growing enterprise resource.” The next step is finding the right architecture to align with your AI goals. Data mesh and data fabric are two competing modern data architecture frontrunners that are similar but have key differences.

Mesh or Fabric? Modern Data Architectures

In the pre-GenAI era, data governance was relatively straightforward. Many companies pooled data into “data lakes” that stored large amounts of raw data. For AI use, that generalized architecture can create bottlenecks that hinder productivity. Data fabric and data mesh architectures are becoming the new industry standards for GenAI implementation, because these modern architectures integrate data from multiple sources into a unified view, simplifying data maintenance and reducing time and costs.

Data Mesh: A data mesh architecture can be a good option for those looking to empower separate business units with data ownership.

Data Fabric: Data fabric offers a centralized architecture, integrating data across an organization. This method allows a unified data structure with central governance.

But those new architectures come with a price. Higher startup costs and ongoing maintenance fees can pose significant barriers to entry for some enterprises, depending on their size and the current state of their data governance. Data mesh will likely have higher upfront costs; data fabric has lower implementation costs but will likely cost more to maintain. So, it’s important to understand potential use cases to justify the spend and to determine which architecture is right for your organization, experts say.
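The ownership split between the two models can be made concrete with a toy sketch. This is purely illustrative Python, not any vendor's API; the class and method names are invented. The essential difference it captures: in a mesh, each business domain publishes and governs its own data products, while in a fabric, a central catalog registers sources under one governance layer.

```python
from dataclasses import dataclass, field

@dataclass
class DataProduct:
    name: str
    owner: str      # who is accountable for this dataset's governance
    schema: dict

# Data mesh: each business domain owns and publishes its own products.
@dataclass
class DomainTeam:
    domain: str
    products: list = field(default_factory=list)

    def publish(self, name: str, schema: dict) -> DataProduct:
        product = DataProduct(name, owner=self.domain, schema=schema)
        self.products.append(product)
        return product

# Data fabric: a central catalog integrates sources under unified governance.
@dataclass
class FabricCatalog:
    registry: dict = field(default_factory=dict)

    def register(self, source: str, name: str, schema: dict) -> DataProduct:
        product = DataProduct(name, owner="central-governance", schema=schema)
        self.registry[f"{source}.{name}"] = product
        return product

# Mesh: the sales domain is the accountable owner of its "orders" product.
sales = DomainTeam("sales")
mesh_orders = sales.publish("orders", {"order_id": "int", "total": "float"})
print(mesh_orders.owner)  # sales

# Fabric: the same dataset is registered and governed centrally.
catalog = FabricCatalog()
fabric_orders = catalog.register("crm", "orders", {"order_id": "int", "total": "float"})
print(fabric_orders.owner)  # central-governance
```

The cost trade-off described above follows from the structure: standing up many accountable domain teams (mesh) costs more upfront, while operating and maintaining one central integration layer (fabric) costs more over time.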
Inna Tokarev Sela, chief executive officer and founder of data fabric firm Illumex, points to specific use cases that can most benefit from modern data architectures. She says the organizations that can most benefit from data fabric include those “which aspire to create a degree of automation, self-service access to data analytics by business users, workflow automation, and process automation.” Businesses with disparate teams who need to use data to build analytics and collaborate can also benefit from a data fabric architecture, she says.

“Data fabric and data mesh are like the Montagues and Capulets, or the Hatfields and McCoys,” says Kendall Clark, co-founder and CEO of data firm Stardog. “It’s like a frenemy rivalry … they are so similar that nobody can tell them apart,
