Artificial Intelligence

Explore top LinkedIn content from expert professionals.

  • Andrew Ng

    Founder of DeepLearning.AI; Managing General Partner of AI Fund; Exec Chairman of LandingAI

    2,235,016 followers

    On Monday, a United States District Court ruled that training LLMs on copyrighted books constitutes fair use. A number of authors had filed suit against Anthropic for training its models on their books without permission. Just as we allow people to read books and learn from them to become better writers, but not to regurgitate copyrighted text verbatim, the judge concluded that it is fair use for AI models to do the same. Indeed, Judge Alsup wrote that the authors’ lawsuit is “no different than it would be if they complained that training schoolchildren to write well would result in an explosion of competing works.” While it remains to be seen whether the decision will be appealed, this ruling is reasonable and will be good for AI progress. (Usual caveat: I am not a lawyer and am not giving legal advice.)

    AI has massive momentum, but a few things could put progress at risk:
    - Regulatory capture that stifles innovation, especially open source
    - Loss of access to cutting-edge semiconductor chips (the most likely cause would be war breaking out in Taiwan)
    - Regulations that severely impede access to data for training AI systems

    Access to high-quality data is important. Even though the mass media tends to talk about the importance of building large data centers and scaling up models, when I speak with friends at companies that train foundation models, many describe a very large amount of their daily challenges as data preparation. Specifically, a significant fraction of their day-to-day work follows the usual Data-Centric AI practices of identifying high-quality data (books are one important source), cleaning data (the ruling describes Anthropic taking steps like removing book pages' headers, footers, and page numbers), carrying out error analyses to figure out what types of data to acquire more of, and inventing new ways to generate synthetic data. I am glad that a major risk to data access just decreased.

    Appropriately, the ruling further said that Anthropic’s conversion of books from paper format to digital — a step that’s needed to enable training — also was fair use. However, in a loss for Anthropic, the judge indicated that, while training on data that was acquired legitimately is fine, using pirated materials (such as texts downloaded from pirate websites) is not fair use. Thus, Anthropic still may be liable on this point. Other LLM providers, too, will now likely have to revisit their practices if they use datasets that may contain pirated works.

    Overall, the ruling is positive for AI progress. Perhaps the biggest benefit is that it reduces ambiguity with respect to AI training and copyright and (if it stands up to appeals) makes the roadmap for compliance clearer.... [Truncated due to length limit. Full text: https://lnkd.in/gAmhYj3k ]
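    To make that cleaning step concrete, here is a minimal illustrative sketch in Python (not Anthropic's actual pipeline; the page-number pattern and running-header check are assumptions for illustration):

        # Illustrative cleaning sketch: strip bare page numbers and a
        # repeated running header from a book page before training.
        import re

        PAGE_NUMBER = re.compile(r"^\s*\d{1,4}\s*$")  # a line that is only a number

        def clean_page(page: str, running_header: str) -> str:
            kept = []
            for line in page.splitlines():
                if PAGE_NUMBER.match(line):           # drop "  217  "
                    continue
                if line.strip() == running_header:    # drop the repeated header
                    continue
                kept.append(line)
            return "\n".join(kept).strip()

        page = "A TALE OF TWO CITIES\nIt was the best of times...\n217"
        print(clean_page(page, running_header="A TALE OF TWO CITIES"))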

  • In January, everyone signs up for the gym, but you're not going to run a marathon in two or three months. The same applies to AI adoption. I've been watching enterprises rush into AI transformations, desperate not to be left behind. Board members demanding AI initiatives, executives asking for strategies, everyone scrambling to deploy the shiniest new capabilities. But here's the uncomfortable truth I've learned from 13+ years deploying AI at scale: Without organizational maturity, AI strategy isn’t strategy — it’s sophisticated guesswork.

    Before I recommend a single AI initiative, I assess five critical dimensions:
    1. 𝗜𝗻𝗳𝗿𝗮𝘀𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗲: Can your systems handle AI workloads? Or are you struggling with basic data connectivity?
    2. 𝗗𝗮𝘁𝗮 𝗲𝗰𝗼𝘀𝘆𝘀𝘁𝗲𝗺: Is your data accessible? Or scattered across 76 different source systems?
    3. 𝗧𝗮𝗹𝗲𝗻𝘁 𝗮𝘃𝗮𝗶𝗹𝗮𝗯𝗶𝗹𝗶𝘁𝘆: Do you have the right people with capacity to focus? Or are your best people already spread across 14 other strategic priorities?
    4. 𝗥𝗶𝘀𝗸 𝘁𝗼𝗹𝗲𝗿𝗮𝗻𝗰𝗲: Is your culture ready to experiment? Or is it still “measure three times, cut once”?
    5. 𝗙𝘂𝗻𝗱𝗶𝗻𝗴 𝗮𝗹𝗶𝗴𝗻𝗺𝗲𝗻𝘁: Are you willing to invest not just in tools, but in the foundational capabilities needed for success?

    This maturity assessment directly informs which of five AI strategies you can realistically execute:
    - Efficiency-based
    - Effectiveness-based
    - Productivity-based
    - Growth-based
    - Expert-based

    Here's my approach that's worked across 39+ production deployments: Think big, start small, scale fast. Or more simply: 𝗖𝗿𝗮𝘄𝗹. 𝗪𝗮𝗹𝗸. 𝗥𝘂𝗻. The companies stuck in POC purgatory? They sprinted before they could stand.

    So remember: AI is a muscle that has to be developed. You don't go from couch to marathon in a month, and you don't go from legacy systems to enterprise-wide AI transformation overnight. What's your organization's AI fitness level? Are you crawling, walking, or ready to run?

  • Yamini Rangan
    145,682 followers

    Last week, I heard from a super impressive customer who has cracked the code on how to give salespeople something they’ve always wanted: more selling time. Here’s how he transformed their process.

    This customer runs the full B2B sales motion at an awesome printing business based in the U.S. For years, his team divided their time across six key areas:
    1. Task prioritization
    2. Meeting prep
    3. Customer responses
    4. Prospecting
    5. Closing deals
    6. Sales strategy

    Like every sales leader I know, he wants his team to spend most of their time on #5 and #6 — closing deals and sales strategy. But together, those only made up about 30% of their week. (Hearing this gave me flashbacks to my time in sales…and all those admin tasks 😱)

    Now, his team uses AI across the sales process to compress the amount of time spent on #1-4:
    1. Task prioritization → AI scores leads and organizes daily tasks
    2. Meeting prep → AI surfaces insights from calls and contact records before meetings
    3. Customer responses → Breeze Customer Agent instantly answers customer questions
    4. Prospecting → Breeze Prospecting Agent automatically researches accounts and books meetings

    The result? Higher quantity of AI-powered work: more prospecting, more pipeline. Higher quality of human-led work: more thoughtful conversations, sharper strategy.

    This COO's story made my week. It's a reminder of just how big a shift we're going through – and why it’s such an exciting time to be in go-to-market right now.

  • Brij kishore Pandey

    AI Architect | Strategist | Generative AI | Agentic AI

    680,031 followers

    Lately, I’ve been getting a lot of questions around the difference between 𝗚𝗲𝗻𝗲𝗿𝗮𝘁𝗶𝘃𝗲 𝗔𝗜, 𝗔𝗜 𝗔𝗴𝗲𝗻𝘁𝘀, and 𝗔𝗴𝗲𝗻𝘁𝗶𝗰 𝗔𝗜. Here’s how I usually explain it — without the jargon.

    𝗚𝗲𝗻𝗲𝗿𝗮𝘁𝗶𝘃𝗲 𝗔𝗜
    This is what most people think of when they hear “AI.” It can write blog posts, generate images, help you code, and more. It’s like a super-smart assistant — but only when you ask. No initiative. No memory. No goals. Tools like ChatGPT, Claude, and GitHub Copilot fall into this bucket.

    𝗔𝗜 𝗔𝗴𝗲𝗻𝘁𝘀
    Now we’re talking action. An AI Agent doesn’t just answer questions — it 𝗱𝗼𝗲𝘀 𝘁𝗵𝗶𝗻𝗴𝘀. It can:
    • Plan tasks
    • Use tools
    • Interact with APIs
    • Loop through steps until the job is done
    Think of it like a junior teammate that can handle a process from start to finish — with minimal handholding.

    𝗔𝗴𝗲𝗻𝘁𝗶𝗰 𝗔𝗜
    This is where things get interesting. Agentic AI is not just about completing a single task. It’s about having 𝗴𝗼𝗮𝗹𝘀, 𝗺𝗲𝗺𝗼𝗿𝘆, and the ability to 𝗮𝗱𝗮𝗽𝘁. It’s the difference between "Write me a summary" vs. "Go read 50 research papers, summarize the key trends, update my Notion, and ping me if there’s anything game-changing." Agentic AI behaves more like a 𝘁𝗵𝗶𝗻𝗸𝗶𝗻𝗴 𝘀𝘆𝘀𝘁𝗲𝗺 than a chatbot. It can collaborate, improve over time, and even work alongside other agents.

    Personally, I think we’re just scratching the surface of what agentic systems can do. We’re moving from building apps to 𝗱𝗲𝘀𝗶𝗴𝗻𝗶𝗻𝗴 𝗮𝘂𝘁𝗼𝗻𝗼𝗺𝗼𝘂𝘀 𝘄𝗼𝗿𝗸𝗳𝗹𝗼𝘄𝘀. And that’s a massive shift. Curious to hear from others building in this space — what tools or frameworks are you experimenting with? LangGraph, AutoGen, CrewAI?
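    To make the Generative AI vs. AI Agent distinction concrete, here is a minimal sketch of an agent loop in Python. Everything in it is a hypothetical stand-in: call_llm() is a scripted planner playing the role of a real model API, and the tools are stubs.

        # Sketch of the loop that separates an AI Agent from a one-shot
        # generative call: plan, use a tool, observe, repeat until done.

        def search_web(query: str) -> str:
            return f"(stub) top results for: {query}"

        def save_note(text: str) -> str:
            return "(stub) note saved"

        TOOLS = {"search_web": search_web, "save_note": save_note}

        def call_llm(messages: list) -> dict:
            # Scripted stand-in for a model: search once, save a note, finish.
            tool_turns = sum(1 for m in messages if m["role"] == "tool")
            if tool_turns == 0:
                return {"tool": "search_web", "args": {"query": messages[0]["content"]}}
            if tool_turns == 1:
                return {"tool": "save_note", "args": {"text": messages[-1]["content"]}}
            return {"final": "Summary written and saved."}

        def run_agent(goal: str, max_steps: int = 10) -> str:
            messages = [{"role": "user", "content": goal}]
            for _ in range(max_steps):            # loop until the job is done
                decision = call_llm(messages)
                if "final" in decision:           # goal reached: stop
                    return decision["final"]
                result = TOOLS[decision["tool"]](**decision["args"])  # act, then observe
                messages.append({"role": "tool", "content": result})
            return "Stopped: step budget exhausted."

        print(run_agent("Summarize this week's agentic AI papers"))

    A plain generative call is just the first call_llm(); the loop, the tools, and the stop condition are what make it an agent. Add persistent memory and self-set goals on top, and you are in agentic territory.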

  • Saanya Ojha

    Partner at Bain Capital Ventures

    64,581 followers

    This week MIT dropped a stat engineered to go viral: 95% of enterprise GenAI pilots are failing. Markets, predictably, had a minor existential crisis. Pundits whispered the B-word (“bubble”), traders rotated into defensive stocks, and your colleague forwarded you a link with “is AI overhyped???” in the subject line.

    Let’s be clear: the 95% failure rate isn’t a caution against AI. It’s a mirror held up to how deeply ossified enterprises are. Two truths can coexist: (1) the tech is very real; (2) most companies are hilariously bad at deploying it.

    If you’re a startup, AI feels like a superpower. No legacy systems. No 17-step approval chains. No legal team asking whether ChatGPT has been “SOC2-audited.” You ship. You iterate. You win. If you’re an enterprise, your org chart looks like a game of Twister and your workflows were last updated when Friends was still airing. You don’t need a better model - you need a cultural lobotomy.

    This isn’t an “AI bubble” popping. It’s the adoption lag every platform shift goes through.
    - Cloud in the 2010s: Endless proofs of concept before actual transformation.
    - Mobile in the 2000s: Enterprises thought an iPhone app was strategy. Spoiler: it wasn’t.
    - Internet in the 90s: Half of Fortune 500 CEOs declared “this is just a fad.” Some of those companies no longer exist.
    History rhymes. The lag isn’t a bug; it’s the default setting.

    Buried beneath the viral 95% headline are 3 lessons enterprises can actually use:
    ▪️ Back-office > front-office. The biggest ROI comes from back-office automation - finance ops, procurement, claims processing - yet over half of AI dollars go into sales and marketing. The treasure’s just buried in a different part of the org chart.
    ▪️ Buy > build. Success rates hit ~67% when companies buy or partner with vendors. DIY attempts succeed a third as often. Unless it’s literally your full-time job to stay current on model architecture, you’ll fall behind. Your engineers don’t need to reinvent an LLM-powered wheel; they need to build where you’re actually differentiated.
    ▪️ Integration > innovation. Pilots flop not because AI “doesn’t work,” but because enterprises don’t know how to weave it into workflows. The “learning gap” is the real killer. Spend as much energy on change management, process design, and user training as you do on the tool itself.

    Without redesigning processes, “AI adoption” is just a Peloton bought in January and used as a coat rack by March. You didn’t fail at fitness; you failed at follow-through.

    In five years, GenAI will be as invisible - and indispensable - as cloud is today. The difference between the winners and the laggards won’t be access to models, but the courage to rip up processes and rebuild them. The “95% failure” stat doesn’t mean AI is snake oil. It means enterprises are in Year 1 of a 10-year adoption curve. The market just mistook growing pains for a terminal illness.

  • Chip Huyen

    Building something new | AI x storytelling x education

    290,728 followers

    LinkedIn has published one of the best reports I’ve read on deploying LLM applications: what worked and what didn’t.

    1. Structured outputs
    They chose YAML over JSON as the output format because YAML uses fewer output tokens. Initially, only 90% of the outputs were correctly formatted YAML. They used re-prompting (asking the model to fix its YAML responses), which increased the number of API calls significantly. They then analyzed the common formatting errors, added those hints to the original prompt, and wrote an error-fixing script. This reduced their errors to 0.01%.

    2. Sacrificing throughput for latency
    Originally, they focused on TTFT (Time To First Token), but realized that TBT (Time Between Tokens) hurt them a lot more, especially with Chain-of-Thought queries where users don’t see the intermediate outputs. They found that TTFT and TBT inversely correlate with TPS (Tokens Per Second). To achieve good TTFT and TBT, they had to sacrifice TPS.

    3. Automatic evaluation is hard
    One core challenge of evaluation is coming up with a guideline on what a good response is. For example, for skill-fit assessment, the response “You’re not a good fit for this job” can be correct, but not helpful. Originally, evaluation was ad hoc: everyone could chime in. That didn’t work. They then had linguists build tooling and processes to standardize annotation, evaluating up to 500 daily conversations, and these manual annotations guide their iteration. Their next goal is automatic evaluation, but it’s not easy.

    4. Initial success with LLMs can be misleading
    It took them 1 month to achieve 80% of the experience they wanted, and an additional 4 months to surpass 95%. The initial success made them underestimate how challenging it is to improve the product, especially dealing with hallucinations. They found it discouraging how slow it was to achieve each subsequent 1% gain.

    #aiengineering #llms #aiapplication
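    The report itself has no code, but the shape of point 1 is easy to sketch. Here is a minimal, hypothetical version in Python with PyYAML: try to parse, apply cheap deterministic fixes for common formatting errors, and only fall back to re-prompting when that fails. The specific fix rules are invented for illustration, and reprompt_fn stands in for a real model call.

        # Sketch of "parse -> deterministic fixes -> re-prompt" for YAML
        # outputs. The fix rules are hypothetical examples, not LinkedIn's.
        import re
        import yaml  # PyYAML

        def fix_common_yaml_errors(text: str) -> str:
            # Strip markdown code fences the model sometimes wraps around YAML.
            text = re.sub(r"^```(?:yaml)?|```$", "", text.strip(), flags=re.MULTILINE)
            text = text.replace("\t", "  ")  # tabs are invalid YAML indentation
            return text

        def parse_llm_yaml(raw: str, reprompt_fn, max_retries: int = 2):
            candidate = raw
            for attempt in range(max_retries + 1):
                try:
                    return yaml.safe_load(fix_common_yaml_errors(candidate))
                except yaml.YAMLError as err:
                    if attempt == max_retries:
                        raise
                    # Last resort: one more API call asking the model to repair it.
                    candidate = reprompt_fn(
                        f"Fix this so it is valid YAML. Return only YAML.\n"
                        f"Error: {err}\nYAML:\n{candidate}"
                    )

    The ordering is the point: deterministic fixes are free, while re-prompting costs another API call, which is why moving fixes into prompt hints and a script is what got their errors down to 0.01%.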

  • Armand Ruiz

    VP of AI Platform @IBM

    199,213 followers

    🚨 MIT Study: 95% of GenAI pilots are failing.

    MIT just confirmed what’s been building under the surface: most GenAI projects inside companies are stalling. Only 5% are driving revenue. The reason? It’s not the models. It’s not the tech. It’s leadership. Too many executives push GenAI to “keep up.” They delegate it to innovation labs, pilot teams, or external vendors without understanding what it takes to deliver real value. Let’s be clear: GenAI can transform your business. But only if leaders stop treating it like a feature and start leading like operators.

    Here's my recommendation:
    𝟭. 𝗚𝗲𝘁 𝗰𝗹𝗼𝘀𝗲𝗿 𝘁𝗼 𝘁𝗵𝗲 𝘁𝗲𝗰𝗵. You don’t need to code, but you do need to understand the basics. Learn enough to ask the right questions and build the strategy.
    𝟮. 𝗧𝗶𝗲 𝗚𝗲𝗻𝗔𝗜 𝘁𝗼 𝗣&𝗟. If your AI pilot isn’t aligned to a core metric like cost reduction, revenue growth, or time-to-value, then it’s a science project. Kill it or redirect it.
    𝟯. 𝗦𝘁𝗮𝗿𝘁 𝘀𝗺𝗮𝗹𝗹, 𝗯𝘂𝘁 𝗯𝘂𝗶𝗹𝗱 𝗲𝗻𝗱-𝘁𝗼-𝗲𝗻𝗱. A chatbot demo is not a deployment. Pick one real workflow, build it fully, measure impact, then scale.
    𝟰. 𝗗𝗲𝘀𝗶𝗴𝗻 𝗳𝗼𝗿 𝗵𝘂𝗺𝗮𝗻𝘀. Most failed projects ignore how people actually work. Don’t just build for the workflow; build for user adoption too. Change management is half the game.

    Not every problem needs AI. But the problems that do need AI also need tooling, observability, governance, and iteration cycles, just like any platform. We’re past the “try it and see” phase. Business leaders need to lead AI like they lead any critical transformation: with accountability, literacy, and focus.

    Link to news: https://lnkd.in/gJ-Yk5sv

    ♻️ Repost to share these insights!
    ➕ Follow Armand Ruiz for more

  • Spencer Dorn

    Vice Chair & Professor of Medicine, UNC | Balanced healthcare perspectives

    17,761 followers

    We continually hear promises that AI will reduce clinicians’ burdens. Yet, we almost never hear about the new burdens AI creates. This NEJM AI perspective explains that because LLMs are imperfect, they create a new and tedious burden of high-stakes proofreading and editing. Meanwhile, disclaimers to cover liability shift accountability from developers to physicians.

    Reading this piece made me think of two examples of AI making our work harder:
    1️⃣ Chinese radiologists who used AI heavily felt more emotionally exhausted and burned out. It turns out that AI often increases interpretation times, especially when abnormalities are reported. [doi:10.1001/jamanetworkopen.2024.48714]
    2️⃣ UCSD primary care physicians who used ChatGPT to respond to patient messages paradoxically spent 22% more time on the task. [doi:10.1001/jamanetworkopen.2024.3201]

    Of course, AI often makes our work easier. For example, AI scribes often speed up documentation. Also, as I explained in two recent Forbes articles, trustworthy AI summaries help us process medical literature and patient information more effectively and efficiently.

    The point is that we must remain clear-eyed and continually ask what we gain and what we lose with AI. Only then will we be able to intelligently decide whether, when, and how to use AI.

  • Aaron Levie

    CEO at Box - Intelligent Content Management

    91,758 followers

    A big question when building AI Agents for the enterprise is where the greatest amount of economic value is in AI Agents, which often ties directly to how differentiated your AI Agent is and your ability to monetize it.

    1. For the most basic AI query or assistant experiences, the economic potential will mostly correlate to how proprietary the data is that your Agents are working off of. For pure public data this is harder to differentiate on and the productivity can be squishier; but the value can be expanded when the Agent has access to domain-specific information, data from tools, or corporate knowledge, and especially where there are direct productivity gains that can be measured.

    2. As AI Agents can execute narrow tasks, like reading documents and extracting data, typing ahead as you generate code for a project, or generating new content, the economic potential goes up quite a bit. These AI Agents will often need access to corporate data, have access to tools, and be able to work across multiple platforms. These Agents start to approximate the value of a discrete task inside of a business process, and thus their productivity can be directly measured.

    3. Then, we'll have AI Agents that can execute entire workflows, like helping with client onboarding processes, reviewing and approving invoices, and more. The potential for economic value creation here is much higher, as these agents will have access to critical corporate knowledge to do their work, often will be line-of-business and industry specific, contain proprietary context about their specific workflow, and tie into other existing software and agentic platforms.

    4. Finally, when AI Agents act effectively as autonomous workers, this leaves the greatest room for economic value. Imagine an AI Agent that can complete an entire FDA submission process, or review and negotiate a legal contract for you, or code an entire application. These agents will be tuned to custom business processes, contain industry-specific knowledge, have access to proprietary data, often autonomously be able to use tools, and more. You'll be able to very directly measure their productivity in a business process.

    Ultimately, when AI Agents become near perfect over time (we still have a ways to go!), there’s almost no upper limit on their economic value. As models improve, and as Agents get more context, have proprietary data to work with, can access tools, and become more industry specific, they’ll become insanely powerful.

  • Rajat Taneja

    President, Technology at Visa

    121,534 followers

    MCP is an MVP if you are exploring ways to supercharge your AI workflows. I am very impressed by the MCP (Model Context Protocol) architecture and proud of the way we have embraced it at Visa to accelerate our GAI work.

    MCP is an open standard introduced by Anthropic. It acts like a universal connector, seamlessly linking AI applications to external tools, data, and services. Think of it as Bluetooth for AI – enabling plug-and-play integrations without multiple, messy connections and custom code.

    For companies embracing the power of GAI, MCP is a dream come true. It eliminates the headache of building a bespoke API integration for every tool, letting AI agents access resources like file systems, wikis, shared drives, and databases in real time. This means your AI can pull custom data, automate tasks, or analyze reports instantly. As an early adopter, we are already using MCP to streamline workflows, and with thousands of community-built MCP servers, the ecosystem is exploding.

    My advice to those beginning their MCP journey – start small. Identify a repetitive task (like updating CRM records or generating analysis). Set up an MCP server for your tool or service (many are prebuilt), connect it to your AI client, and watch the magic happen. Experiment, scale, and explore the open-source MCP community for inspiration. Once you start using MCP, you will see a step-function increase in your innovation velocity.
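    To make "start small" concrete: a tool server can be only a few lines. Below is a minimal sketch using the MCP Python SDK's FastMCP helper; the CRM here is a hypothetical in-memory dict standing in for a real system of record.

        # Minimal MCP server sketch (MCP Python SDK). The "CRM" is a
        # hypothetical in-memory stand-in, not a real integration.
        from mcp.server.fastmcp import FastMCP

        mcp = FastMCP("crm-tools")

        FAKE_CRM = {"ACME-42": {"owner": "pat", "status": "prospect"}}

        @mcp.tool()
        def update_crm_record(record_id: str, status: str) -> str:
            """Set the status field on a CRM record and confirm the change."""
            record = FAKE_CRM.get(record_id)
            if record is None:
                return f"No record found for {record_id}"
            record["status"] = status
            return f"{record_id} status set to {status}"

        if __name__ == "__main__":
            mcp.run()  # stdio transport by default; an MCP client connects here

    Point an MCP-capable client at this server and the model can discover and call update_crm_record like any other tool, with no bespoke API glue.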
