AI's ability to make tasks not just cheaper, but also faster, is underrated in its importance in creating business value. For the task of writing code, AI is a game-changer. It takes so much less effort, and is so much cheaper, to write software with AI assistance than without. But beyond reducing the cost of writing software, AI is shortening the time from idea to working prototype, and the ability to test ideas faster is changing how teams explore and invent. When you can test 20 ideas per month, it dramatically changes what you can do compared to testing 1 idea per month. This is a benefit that comes from AI-enabled speed rather than AI-enabled cost reduction.

That AI-enabled automation can reduce costs is well understood. For example, providing automated customer service is cheaper than operating human-staffed call centers. Many businesses are more willing to invest in growth than just in cost savings; and, when a task becomes cheaper, some businesses will do a lot more of it, thus creating growth. But another recipe for growth is underrated: making certain tasks much faster (whether or not they also become cheaper) can create significant new value. I see this pattern across more and more businesses. Consider the following scenarios:

- If a lender can approve loans in minutes using AI, rather than days waiting for a human to review them, this creates more borrowing opportunities (and also lets the lender deploy its capital faster). Even if human-in-the-loop review is needed, using AI to get the most important information to the reviewer might speed things up.
- If an academic institution gives homework feedback to students in minutes (via autograding) rather than days (via human grading), the rapid feedback facilitates better learning.
- If an online seller can approve purchases faster, this can lead to more sales. For example, many platforms that accept online ad purchases have an approval process that can take hours or days; if approvals can be done faster, they can earn revenue faster. This also enables customers to test ideas faster.
- If a company's sales department can prioritize leads and respond to prospective customers in minutes or hours rather than days (closer to when the customers' buying intent first led them to contact the company), sales representatives might close more deals. Likewise, a business that can respond more quickly to requests for proposals may win more deals.

I've written previously about looking at the tasks a company does to explore where AI can help. Many teams already do this with an eye toward making tasks cheaper, either to save costs or to do those tasks many more times. If you're doing this exercise, consider also whether AI can significantly speed up certain tasks. One place to examine is the sequence of tasks on the path to earning revenue. If some of the steps can be sped up, perhaps this can help revenue growth. [Edited for length; full text: https://lnkd.in/gBCc2FTn ]
-
As technology becomes the backbone of modern business, understanding cybersecurity fundamentals has shifted from a specialized skill to a critical competency for all IT professionals. Here's an overview of the critical areas IT professionals need to master:

Phishing Attacks
- What it is: Deceptive emails designed to trick users into sharing sensitive information or downloading malicious files.
- Why it matters: Phishing accounts for over 90% of cyberattacks globally.
- How to prevent it: Implement email filtering, educate users, and enforce multi-factor authentication (MFA).

Ransomware
- What it is: Malware that encrypts data and demands payment for its release.
- Why it matters: The average ransomware attack costs organizations millions in downtime and recovery.
- How to prevent it: Regular backups, endpoint protection, and a robust incident response plan.

Denial-of-Service (DoS) Attacks
- What it is: Overwhelming systems with traffic to disrupt service availability.
- Why it matters: DoS attacks can cripple mission-critical systems.
- How to prevent it: Use load balancers, rate limiting, and cloud-based mitigation solutions.

Man-in-the-Middle (MitM) Attacks
- What it is: Interception and manipulation of data between two parties.
- Why it matters: These attacks compromise data confidentiality and integrity.
- How to prevent it: Use end-to-end encryption and secure protocols like HTTPS.

SQL Injection
- What it is: Exploitation of database vulnerabilities to gain unauthorized access or manipulate data.
- Why it matters: It's one of the most common web application vulnerabilities.
- How to prevent it: Validate input and use parameterized queries.

Cross-Site Scripting (XSS)
- What it is: Injection of malicious scripts into web applications to execute on users' browsers.
- Why it matters: XSS compromises user sessions and data.
- How to prevent it: Sanitize user inputs and use content security policies (CSP).

Zero-Day Exploits
- What it is: Attacks that exploit unknown or unpatched vulnerabilities.
- Why it matters: These attacks are highly targeted and difficult to detect.
- How to prevent it: Regular patching and leveraging threat intelligence tools.

DNS Spoofing
- What it is: Manipulating DNS records to redirect users to malicious sites.
- Why it matters: It compromises user trust and security.
- How to prevent it: Use DNSSEC (Domain Name System Security Extensions) and monitor DNS traffic.

Why Mastering Cybersecurity Matters
- Risk Mitigation: Proactive knowledge minimizes exposure to threats.
- Organizational Resilience: Strong security measures ensure business continuity.
- Stakeholder Trust: Protecting digital assets fosters confidence among customers and partners.

The cybersecurity landscape evolves rapidly. Staying ahead requires regular training and keeping pace with the latest trends and technologies.
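The SQL injection prevention advice above (validate input, use parameterized queries) can be sketched in a few lines. This is a minimal illustration using Python's built-in sqlite3; the table, columns, and function name are made up for the example, not from the post:

```python
import sqlite3

# Toy database for the demo.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user(conn, name):
    # UNSAFE would be: f"SELECT * FROM users WHERE name = '{name}'"
    # With that, name = "' OR '1'='1" returns every row in the table.
    # SAFE: the ? placeholder treats the input strictly as data, never as SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user(conn, "alice"))        # [('alice', 'admin')]
print(find_user(conn, "' OR '1'='1"))  # [] - the injection string matches no name
```

The same placeholder pattern exists in virtually every database driver; the key design choice is that query structure and user data travel to the database separately.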
-
Time to dust off the "OpenAI killed my startup" t-shirts. OpenAI just put on its big boy pants and entered the enterprise - deliberately this time, not just by osmosis from consumer demand. Announced today:
- Record mode: audio-only meeting capture, smart summaries, action items
- Connectors: access Google Drive, SharePoint, Box, Dropbox, OneDrive from inside ChatGPT
- Deep Research: pull from HubSpot, Linear, and internal tools via MCP
- Canvas: turn meetings into documents, tasks, and execution flows

OpenAI now has 3 million paying business users, up from 2M just three months ago. That's 1M net new in a quarter. They're signing 9 new enterprises a week. The vision is simple: stop toggling tabs. ChatGPT doesn't want to be a tool you switch to, but a surface you operate from.

Why this matters:
- Integrations with cloud drives and CRMs mean it's now context-aware within your business's actual knowledge stack, not just the public web.
- Model Context Protocol support is one of the most important moves: it allows companies to feed ChatGPT real-time context from custom tools, which could unlock vertical-specific agents (e.g., biotech, legal, sales).
- Connectors and MCP support create a moat. Once a company connects its internal data sources and builds workflows atop ChatGPT, switching costs rise sharply.
- Although Microsoft is a key OpenAI partner, Copilot and ChatGPT are starting to collide. Features like transcription, research, and action items overlap with Copilot for M365.

This announcement marks another step in our relentless march toward agentic AI: systems that don't just assist, but observe, reason, and act within real workflows. The battle for the AI-first enterprise stack is officially on. The usual suspects - Google, Anthropic, Microsoft - are obviously in the ring, but so are Notion, ClickUp, Zoom - all hoping to crack AI-powered productivity.
The trillion-dollar question is this: Can a model provider ultimately become the place where work happens, or just the thing that helps it along?
-
AI is not failing because of bad ideas; it's "failing" at enterprise scale because of two big gaps:
- Workforce Preparation
- Data Security for AI

While I speak globally on both topics in depth, today I want to educate us on what it takes to secure data for AI, because 70-82% of AI projects pause or get cancelled at the POC/MVP stage (source: #Gartner, #MIT). Why? One of the biggest reasons is a lack of readiness at the data layer. So let's make it simple: there are 7 phases to securing data for AI, and each phase has direct business risk if ignored.

Phase 1: Data Sourcing Security - Validating the origin, ownership, and licensing rights of all ingested data.
Why It Matters: You can't build scalable AI with data you don't own or can't trace.

Phase 2: Data Infrastructure Security - Ensuring the data warehouses, lakes, and pipelines that support your AI models are hardened and access-controlled.
Why It Matters: Unsecured data environments are easy targets for bad actors, exposing you to data breaches, IP theft, and model poisoning.

Phase 3: Data In-Transit Security - Protecting data as it moves across internal or external systems, especially between cloud, APIs, and vendors.
Why It Matters: Intercepted training data = compromised models. Think of it as shipping cash across town in an armored truck, or on a bicycle; your choice.

Phase 4: API Security for Foundational Models - Safeguarding the APIs you use to connect with LLMs and third-party GenAI platforms (OpenAI, Anthropic, etc.).
Why It Matters: Unmonitored API calls can leak sensitive data into public models or expose internal IP. This isn't just tech debt. It's reputational and regulatory risk.

Phase 5: Foundational Model Protection - Defending your proprietary models and fine-tunes from external inference, theft, or malicious querying.
Why It Matters: Prompt injection attacks are real. And your enterprise-trained model? It's a business asset. You lock your office at night; do the same with your models.

Phase 6: Incident Response for AI Data Breaches - Having predefined protocols for breaches, hallucinations, or AI-generated harm: who's notified, who investigates, how damage is mitigated.
Why It Matters: AI-related incidents are happening. Legal needs response plans. Cyber needs escalation tiers.

Phase 7: CI/CD for Models (with Security Hooks) - Continuous integration and delivery pipelines for models, embedded with testing, governance, and version-control protocols.
Why It Matters: Shipping models like software means risk comes faster, and so must detection. Governance must be baked into every deployment sprint.

Want your AI strategy to succeed past MVP? Focus on the data and lock it down. #AI #DataSecurity #AILeadership #Cybersecurity #FutureOfWork #ResponsibleAI #SolRashidi #Data #Leadership
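Phase 4 (API security) is the most concrete to sketch. Below is a minimal, hypothetical pre-flight scrubber that redacts obvious PII before a prompt ever leaves your network for a third-party LLM API; the regex patterns and placeholder tokens are illustrative assumptions, not a complete DLP solution:

```python
import re

# Hypothetical patterns for the sketch: an email address and a US SSN.
# A real deployment would use a vetted DLP library, not two regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each PII match with a labeled placeholder before the API call."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarize the complaint from jane.doe@example.com (SSN 123-45-6789)."
print(redact(prompt))
# Summarize the complaint from [EMAIL] (SSN [SSN]).
```

The design point is where this runs: scrubbing happens on your side of the API boundary, so even a misconfigured or logged request upstream never sees the raw values.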
-
What if the real disruption in manufacturing isn't coming from AI, cloud, or automation... but from the uncomfortable realization that we've been investing in all the wrong things?

According to Deloitte's 2025 Smart Manufacturing Survey, manufacturers are pouring billions into tech, allocating large shares of their improvement budgets to smart manufacturing and prioritizing process automation. The intent is clear. The excitement is real. But... I would argue we're still not ready.

Not in our culture. Not in our org structures. Not in how we prepare our people.

The data exposes the gap. Human capital is the least mature capability in the smart manufacturing stack. Only a minority of companies have a training and adoption standard. Yet it's the number one area they say they want to improve. And while many believe smart manufacturing will attract new talent, more than a third say their biggest human capital concern is simply adapting workers to the factory of the future.

We like the sound of digital transformation as long as it doesn't slow us down. We like the optics of AI as long as we don't have to redesign how we work. We like talking about the workforce of the future as long as we don't have to train the one we already have.

So yes, investment is rising. But if we don't confront the outdated systems and assumptions holding us back, all we're doing is layering expensive tech on fragile foundations. The biggest barrier to smart manufacturing isn't budget, technology, or even talent. It's us.

Check out the full report: https://lnkd.in/e6_QsJcw

- Visit www.jeffwinterinsights.com for access to all my content and to stay current on Industry 4.0 and other cool tech trends
- Ring the bell for notifications!
-
Guide to Building an AI Agent

1. Choose the Right LLM
Not all LLMs are equal. Pick one that:
- Excels in reasoning benchmarks
- Supports chain-of-thought (CoT) prompting
- Delivers consistent responses
Tip: Experiment with models and fine-tune prompts to enhance reasoning.

2. Define the Agent's Control Logic
Your agent needs a strategy:
- Tool Use: Call tools when needed; otherwise, respond directly.
- Basic Reflection: Generate, critique, and refine responses.
- ReAct: Plan, execute, observe, and iterate.
- Plan-then-Execute: Outline all steps first, then execute.
Choosing the right approach improves reasoning and reliability.

3. Define Core Instructions & Features
Set operational rules:
- How to handle unclear queries? (Ask clarifying questions)
- When to use external tools?
- Formatting rules? (Markdown, JSON, etc.)
- Interaction style?
Clear system prompts shape agent behavior.

4. Implement a Memory Strategy
LLMs forget past interactions. Memory strategies:
- Sliding Window: Retain recent turns, discard old ones.
- Summarized Memory: Condense key points for recall.
- Long-Term Memory: Store user preferences for personalization.
Example: A financial AI recalls risk tolerance from past chats.

5. Equip the Agent with Tools & APIs
Extend capabilities with external tools:
- Name: Clear, intuitive (e.g., "StockPriceRetriever")
- Description: What does it do?
- Schemas: Define input/output formats
- Error Handling: How to manage failures?
Example: A support AI retrieves order details via a CRM API.

6. Define the Agent's Role & Key Tasks
Narrowly defined agents perform better. Clarify:
- Mission: (e.g., "I analyze datasets for insights.")
- Key Tasks: (Summarizing, visualizing, analyzing)
- Limitations: ("I don't offer legal advice.")
Example: A financial AI focuses on finance, not general knowledge.

7. Handle Raw LLM Outputs
Post-process responses for structure and accuracy:
- Convert AI output to structured formats (JSON, tables)
- Validate correctness before user delivery
- Ensure correct tool execution
Example: A financial AI converts extracted data into JSON.

8. Scale to Multi-Agent Systems (Advanced)
For complex workflows:
- Info Sharing: What context is passed between agents?
- Error Handling: What if one agent fails?
- State Management: How to pause/resume tasks?
Example: One agent fetches data, another summarizes, and a third generates a report.

Master the fundamentals, experiment, and refine... now go build something amazing! Happy agenting!
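The sliding-window strategy from step 4 is simple enough to sketch directly. The class and parameter names below are illustrative assumptions, not part of any particular framework:

```python
from collections import deque

class SlidingWindowMemory:
    """Keep only the last N conversation turns as the agent's context."""

    def __init__(self, max_turns: int = 3):
        # deque with maxlen drops the oldest turn automatically on overflow.
        self.turns = deque(maxlen=max_turns)

    def add(self, role: str, content: str):
        self.turns.append({"role": role, "content": content})

    def context(self):
        # What you would prepend to the next LLM prompt.
        return list(self.turns)

memory = SlidingWindowMemory(max_turns=3)
for i in range(5):
    memory.add("user", f"message {i}")

print([t["content"] for t in memory.context()])
# ['message 2', 'message 3', 'message 4']
```

The trade-off versus summarized or long-term memory is clear here: the window bounds token cost per request, but anything older than `max_turns` is gone unless another strategy captures it.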
-
Water-cooled data centers are more energy-efficient than air-cooled data centers. Water can achieve higher thermal performance and lower energy consumption, resulting in increased energy savings and reduced operational costs. At Cerebras Systems all our data centers are water-cooled, which is part of the reason our systems consume so much less power per token than the competition. The benefits of water-cooled data centers are:

- Higher Thermal Capacity: Water has a significantly higher heat capacity than air, meaning it can absorb more heat and transport it more efficiently.
- Improved Heat Transfer: Water can transfer heat away from components more effectively than air, allowing for a smaller, more efficient cooling system.
- Reduced Fan Power: Water cooling eliminates the need for the high-powered fans typically used in air-cooled systems, resulting in significant energy savings.
- High Density and Flexibility: Water cooling allows for higher equipment density in data centers and greater flexibility in thermal management.
- Power Usage Effectiveness: Water-cooled data centers achieve lower PUE values, a measure of the energy efficiency of a data center.
- Sustainability: Reduced energy consumption and the possibility of using heat recovery systems make water cooling a more sustainable choice.
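The PUE point can be made concrete with a toy calculation. PUE is defined as total facility energy divided by IT equipment energy, with 1.0 being ideal; the figures below are illustrative assumptions, not Cerebras data:

```python
# PUE (Power Usage Effectiveness) = total facility energy / IT equipment energy.
# Everything above the IT load (cooling, fans, power conversion) is overhead.
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh

# Hypothetical facilities serving the same 1000 kWh of IT load:
air_cooled = pue(total_facility_kwh=1600, it_equipment_kwh=1000)    # 1.6
water_cooled = pue(total_facility_kwh=1150, it_equipment_kwh=1000)  # 1.15

print(f"air: {air_cooled:.2f}, water: {water_cooled:.2f}")
print(f"overhead saved: {(air_cooled - water_cooled) * 1000:.0f} kWh")  # 450 kWh
```

Because the IT load is identical in both cases, the entire PUE gap is cooling and distribution overhead, which is exactly where water's higher heat capacity pays off.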
-
AI just helped a couple get pregnant - after 19 years and 15 failed IVF cycles. The breakthrough came with an AI tool built by a team at Columbia University. It's called STAR - the world's first AI system trained to find sperm that embryologists can't. The husband had azoospermia, a condition where no sperm is visible under the microscope. Dozens of attempts, surgeries, and even overseas experts had failed. But the team at Columbia didn't give up. They spent 5 years building STAR (Sperm Track and Recovery). The system scans 8 million images per hour using a chip and computer vision, then gently isolates viable sperm missed by even the most experienced lab techs. And it worked.

- STAR found 44 sperm in a sample that had been manually searched for two full days.
- That one breakthrough led to a pregnancy that had felt impossible for nearly two decades.
- And it did so without chemicals, donor samples, or invasive extraction methods.

For millions of couples dealing with infertility, this is a glimpse of what AI-assisted reproductive medicine could unlock. But more importantly, this shows us what AI in healthtech should be aiming for: not just more data. Not just smarter models. But real clinical results that change lives. And as a healthtech investor, this is what I look for in AI-driven care:
- A clear pain point
- A targeted intervention
- And a story no one can ignore

What's your take: could AI reshape fertility care the way it's starting to reshape diagnostics and mental health? #entrepreneurship #healthtech #innovation
-
NVIDIA's $7B Mellanox acquisition was actually one of tech's most strategic deals ever. This is the untold story of the most important company in AI that most people haven't heard of.

Most people think NVIDIA = GPUs. But modern AI training is actually a networking problem. A single A100 can only hold ~50B parameters. Training large models requires splitting them across hundreds of GPUs.

Enter Mellanox. They pioneered RDMA (Remote Direct Memory Access), which lets GPUs directly access memory on other machines with almost no CPU overhead. Before RDMA, moving data between GPUs was a massive bottleneck.

The secret sauce is Mellanox's InfiniBand. While Ethernet latencies run 200-400ns, InfiniBand achieves ~100ns. For distributed AI training, where GPUs constantly sync gradients, this 2-3x latency difference is massive.

Mellanox didn't just do hardware. Their GPUDirect RDMA software stack lets GPUs talk directly to network cards, bypassing the CPU and system memory. This cuts latency another ~30% versus traditional networking stacks.

NVIDIA's master stroke: integrating Mellanox's ConnectX NICs directly into their DGX AI systems. The full stack - GPUs, NICs, switches, drivers - all optimized together. No one else can match this vertical integration.

The numbers are staggering:
- HDR InfiniBand: 200Gb/s per port
- Quantum-2 switch: 400Gb/s per port
- End-to-end latency: ~100ns
- GPU memory bandwidth matching: ~900GB/s

Why it matters: training SOTA-scale models requires:
- 1000s of GPUs
- Petabytes of data movement
- Sub-millisecond latency requirements
Without Mellanox tech, it would take literally months longer.

The competition is playing catch-up:
- Intel killed OmniPath
- Broadcom/Ethernet still has higher latency
- Cloud providers are mostly stuck with RoCE
NVIDIA owns the premium AI networking stack.

Looking ahead: CXL + Mellanox tech will enable even tighter GPU-NIC integration. We'll see dedicated AI networks with sub-50ns latency and Tb/s bandwidth. The networking advantage compounds.
In the AI arms race, networking is the silent kingmaker. NVIDIA saw this early. The Mellanox deal wasn't about current revenue - it was about controlling the foundational tech for training next-gen AI. Next time you hear about a new large language model breakthrough, remember: The GPUs get the glory, but Mellanox's networking makes it possible. Sometimes the most important tech is invisible.
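The latency claim above can be made concrete with a back-of-the-envelope model. A ring all-reduce over N GPUs takes roughly 2*(N-1) communication steps and moves about 2*(N-1)/N of the gradient bytes per GPU, so small, frequent syncs are dominated by the per-hop latency term. All figures below are illustrative assumptions, not vendor benchmarks:

```python
# Toy cost model for ring all-reduce: a latency-bound term (number of steps)
# plus a bandwidth-bound term (bytes on the wire per GPU).
def allreduce_seconds(grad_bytes: float, gpus: int,
                      link_gbps: float, hop_latency_s: float) -> float:
    steps = 2 * (gpus - 1)                           # sequential communication steps
    traffic = 2 * (gpus - 1) / gpus * grad_bytes     # per-GPU bytes transferred
    return steps * hop_latency_s + traffic / (link_gbps / 8 * 1e9)  # Gb/s -> B/s

# Syncing a 1 MB gradient bucket across 1024 GPUs at 400 Gb/s links,
# comparing ~100 ns hops (InfiniBand-like) with ~300 ns hops (Ethernet-like):
ib = allreduce_seconds(1e6, 1024, 400, hop_latency_s=100e-9)
eth = allreduce_seconds(1e6, 1024, 400, hop_latency_s=300e-9)
print(f"InfiniBand-like: {ib*1e6:.0f} us, Ethernet-like: {eth*1e6:.0f} us")
```

Under these assumptions the lower-latency fabric finishes the sync more than twice as fast even at identical bandwidth, which is the post's point: for chatty gradient traffic, latency, not just throughput, sets the pace.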
-
5 things that aren't super helpful for colleagues affected by layoffs, and what I'm doing instead:

1. Asking them to tell me how I can help → Proactively leaving LinkedIn recommendations and endorsing skills.
2. Spamming them with random job postings I see → Sending them jobs where I can make an introduction to the recruiter, hiring manager, or someone on the team.
3. Sending their info to my recruiter connections with no context → Sharing their profile with my recruiter connections for specific job postings I know they are interested in.
4. Only supporting them privately → Turning on notifications for them on LinkedIn to provide likes, comments, and hype to give their posts a boost in visibility.
5. Immediately jumping to offer feedback → Listening to see if they are looking to solve problems or just need to vent.

It's normal to be a bit lost and not know what to do, especially if this is your first rodeo (like mine). We have more power than we realize, and little tweaks to how we approach things can make a huge difference. Good luck.