Artificial Intelligence

Explore top LinkedIn content from expert professionals.

  • Yamini Rangan (Influencer)
    156,214 followers

    Last week, I heard from a super impressive customer who has cracked the code on how to give salespeople something they’ve always wanted: more selling time. Here’s how he transformed their process.

    This customer runs the full B2B sales motion at an awesome printing business based in the U.S. For years, his team divided their time across six key areas:
    1. Task prioritization
    2. Meeting prep
    3. Customer responses
    4. Prospecting
    5. Closing deals
    6. Sales strategy

    Like every sales leader I know, he wants his team to spend most of their time on #5 and #6: closing deals and sales strategy. But together, those only made up about 30% of their week. (Hearing this gave me flashbacks to my time in sales…and all those admin tasks 😱)

    Now, his team uses AI across the sales process to compress the time spent on #1-4:
    1. Task prioritization → AI scores leads and organizes daily tasks
    2. Meeting prep → AI surfaces insights from calls and contact records before meetings
    3. Customer responses → Breeze Customer Agent instantly answers customer questions
    4. Prospecting → Breeze Prospecting Agent automatically researches accounts and books meetings

    The result? Higher quantity of AI-powered work: more prospecting, more pipeline. Higher quality of human-led work: more thoughtful conversations, sharper strategy.

    This COO's story made my week. It's a reminder of just how big a shift we're going through, and why it’s such an exciting time to be in go-to-market right now.

  • Dr. Barry Scannell (Influencer)

    AI Law & Policy | Partner in Leading Irish Law Firm William Fry | Member of Irish Government’s Artificial Intelligence Advisory Council | PhD in AI & Copyright | LinkedIn Top Voice in AI | Global Top 200 AI Leaders 2025

    56,917 followers

    HUGE AI LEGAL NEWS! The European Data Protection Board (EDPB) has published its much-anticipated Opinion on AI and data protection. The opinion looks at:
    1) when and how AI models can be considered anonymous,
    2) whether and how legitimate interest can be used as a legal basis for developing or using AI models, and
    3) what happens if an AI model is developed using personal data that was processed unlawfully.
    It also considers the use of first- and third-party data.

    The opinion addresses the consequences of developing AI models with unlawfully processed personal data, an area of particular concern for both developers and users. The EDPB clarifies that supervisory authorities are empowered to impose corrective measures, including the deletion of unlawfully processed data, retraining of the model, or even requiring its destruction in severe cases.

    On the issue of anonymity, the opinion grapples with the question of whether AI models trained on personal data can ever fully transcend their origins to be considered anonymous. The EDPB highlights that merely asserting that an AI model does not process personal data is insufficient. Supervisory authorities (SAs) must assess claims of anonymity rigorously, considering whether personal data has been effectively anonymised in the model and whether risks such as re-identification or membership inference attacks have been mitigated. For AI developers, this means that claims of anonymity should be substantiated with evidence, including the implementation of technical and organisational measures to prevent re-identification.

    On legitimate interest as a legal basis for AI, the opinion offers detailed guidance for both development and deployment phases. Legitimate interest under Article 6(1)(f) GDPR requires meeting three cumulative conditions: pursuing a legitimate interest, demonstrating that processing is necessary to achieve that interest, and ensuring the processing does not override the fundamental rights and freedoms of data subjects. For third-party data, the opinion emphasises that the absence of a direct relationship with the data subjects necessitates stronger safeguards, including enhanced transparency, opt-out mechanisms, and robust risk assessments.

    The opinion’s findings stress that the balancing test under legitimate interest must consider the unique risks posed by AI. These include discriminatory outcomes, regurgitation of personal data by generative AI models, and the broader societal risks of misuse, such as through deepfakes or misinformation campaigns. The opinion also provides examples of mitigating measures that could tip the balance in favour of controllers, such as pseudonymisation, output filters, and voluntary transparency initiatives like model cards and annual reports. The implications for developers are significant: compliance failures in the development phase can render an entire AI system non-compliant, leading to legal and operational challenges.

  • Luiza Jarovsky, PhD (Influencer)

    Co-founder of the AI, Tech & Privacy Academy (1,300+ participants), Author of Luiza’s Newsletter (87,000+ subscribers), Mother of 3

    121,578 followers

    🚨 [EU REGULATION] The Data Act became law, and most people have NO IDEA of what it means in practice. The EU has recently published an explainer; here's what you need to know:

    1️⃣ Data access
    "The Data Act enables users of connected products (e.g. connected cars, medical and fitness devices, industrial or agricultural machinery) and related services (i.e. anything that would make a connected product behave in a specific manner, such as an app to adjust the brightness of lights, or to regulate the temperature of a fridge) to access the data that they co-create by using the connected products/ related services."

    2️⃣ Data sharing
    "The Data Act introduces rules for situations where a business (‘data holder’) has a legal obligation under EU or national law to make data available to another business (‘data recipient’), including in the context of IoT data. Notably, the data-sharing terms and conditions must be fair, reasonable and non-discriminatory. As an incentive to data sharing, data holders that are obliged to share data may request ‘reasonable compensation’ from the data recipient."

    3️⃣ Private-to-public sharing
    "Data held by private entities may be essential for a public sector body to undertake a task of public interest. Chapter V of the Data Act allows public sector bodies to access such data, under certain terms and conditions, where there is an exceptional need. (...) Situations of exceptional need include both public emergencies (such as major natural or human-induced disasters, pandemics and cybersecurity incidents) and non-emergency situations (for example, aggregated and anonymised data from drivers’ GPS systems could be used to help optimise traffic flows). The Data Act will ensure that public authorities have access to such data in a timely and reliable manner, without imposing an undue administrative burden on businesses."

    4️⃣ Interoperability
    "(...) customers of data processing services (including cloud and edge services) should be able to switch seamlessly from one provider to another. (...) The Data Act will make switching free, fast and fluid. This will benefit customers, who can freely choose the services that best meet their needs, as well as providers, who will benefit from a larger pool of customers"

    ➡ Comments:
    ➵ The Data Act focuses on facilitating data access by people, companies, and governments. Among its goals is to increase fairness and competition in the EU, and to give people more control over their data.
    ➵ It doesn't mention AI once (although it mentions machine learning). Still, it will directly impact AI companies, as connected devices are often AI-powered. It will be interesting to observe the intersection between the Data Act and the AI Act.

    ➡ Read the Data Act explainer below.

    🔥 To stay up to date with the latest developments in AI & Tech policy, compliance & regulation, join 34,500+ people who subscribe to my weekly newsletter (link below).

    #DataAct #TechRegulation #IoT #AI #AIGovernance

  • Colin S. Levy (Influencer)

    General Counsel @ Malbek - CLM for Enterprise | Adjunct Professor of Law | Author of The Legal Tech Ecosystem | Legal Tech Educator | Fastcase 50 (2022)

    45,827 followers

    As a veteran SaaS lawyer, I've watched Data Processing Agreements (DPAs) evolve from afterthoughts to deal-breakers. Let's dive into why they're now non-negotiable and what you need to know:

    A) DPA Essentials Often Overlooked:
    - Subprocessor Management: DPAs should detail how and when clients are notified of new subprocessors. This isn't just courteous - it's often legally required.
    - Cross-Border Transfers: Post-Schrems II, mechanisms for lawful data transfers are crucial. Standard Contractual Clauses aren't a silver bullet anymore.
    - Data Minimization: Concrete steps to ensure only necessary data is processed. Vague promises don't cut it.
    - Audit Rights: Specific procedures for controller-initiated audits. Without these, you're flying blind on compliance.
    - Breach Notification: Clear timelines and processes for reporting data breaches. Every minute counts in a crisis.

    B) Why Cookie-Cutter DPAs Fall Short:
    - Industry-Specific Risks: Healthcare DPAs need HIPAA provisions; fintech needs PCI-DSS compliance clauses. One size does not fit all.
    - AI/ML Considerations: Special clauses for automated decision-making and profiling are essential as AI becomes ubiquitous.
    - IoT Challenges: Addressing data collection from connected devices. The 'Internet of Things' is a privacy minefield.
    - Data Portability: Clear processes for returning data in usable formats post-termination. Don't let your data become a hostage.
    - Privacy by Design: Embedding privacy considerations into every aspect of data processing. It's not just good practice - it's the law.

    In 2024, with GDPR fines hitting €1.4 billion, generic DPAs are a liability, not a safeguard. As AI and IoT reshape data landscapes, DPAs must evolve beyond checkbox exercises to become strategic tools. Remember, in the fast-paced tech industry, knowledge of these agreements isn't just useful – it's essential. They're not just legal documents – they're the foundation for innovation and collaboration in our digital age.

    Pro tip: Review your DPAs quarterly. The data world moves fast - your agreements should keep pace. Pay special attention to changes in data protection laws, new technologies you're adopting, and shifts in your data processing activities. Clear, well-structured DPAs prevent disputes and protect all parties' interests.

    What's the trickiest DPA clause you've negotiated? Share your war stories below.

    #legaltech #innovation #law #business #learning

  • Jim Fan (Influencer)

    NVIDIA Director of AI & Distinguished Scientist. Co-Lead of Project GR00T (Humanoid Robotics) & GEAR Lab. Stanford Ph.D. OpenAI's first intern. Solving Physical AGI, one motor at a time.

    223,589 followers

    Exciting updates on Project GR00T! We discovered a systematic way to scale up robot data, tackling the most painful pain point in robotics. The idea is simple: a human collects demonstrations on a real robot, and we multiply that data 1000x or more in simulation. Let’s break it down:

    1. We use Apple Vision Pro (yes!!) to give the human operator first-person control of the humanoid. Vision Pro parses human hand pose and retargets the motion to the robot hand, all in real time. From the human’s point of view, they are immersed in another body, like in Avatar. Teleoperation is slow and time-consuming, but we can afford to collect a small amount of data.

    2. We use RoboCasa, a generative simulation framework, to multiply the demonstration data by varying the visual appearance and layout of the environment. In Jensen’s keynote video below, the humanoid is now placing the cup in hundreds of kitchens with a huge diversity of textures, furniture, and object placement. We only have 1 physical kitchen at the GEAR Lab in NVIDIA HQ, but we can conjure up infinite ones in simulation.

    3. Finally, we apply MimicGen, a technique to multiply the above data even more by varying the *motion* of the robot. MimicGen generates a vast number of new action trajectories based on the original human data, and filters out failed ones (e.g. those that drop the cup) to form a much larger dataset.

    To sum up: given 1 human trajectory with Vision Pro -> RoboCasa produces N (varying visuals) -> MimicGen further augments to NxM (varying motions). This is the way to trade compute for expensive human data by GPU-accelerated simulation.

    A while ago, I mentioned that teleoperation is fundamentally not scalable, because we are always limited by 24 hrs/robot/day in the world of atoms. Our new GR00T synthetic data pipeline breaks this barrier in the world of bits. Scaling has been so much fun for LLMs, and it's finally our turn to have fun in robotics!

    We are creating tools to enable everyone in the ecosystem to scale up with us:
    - RoboCasa: our generative simulation framework (Yuke Zhu). It's fully open-source! Here you go: http://robocasa.ai
    - MimicGen: our generative action framework (Ajay Mandlekar). The code is open-source for robot arms, but we will have another version for humanoids and 5-finger hands: https://lnkd.in/gsRArQXy
    - We are building a state-of-the-art Apple Vision Pro -> humanoid robot "Avatar" stack. Xiaolong Wang's group’s open-source libraries laid the foundation: https://lnkd.in/gUYye7yt
    - Watch Jensen's keynote yesterday. He cannot hide his excitement about Project GR00T and robot foundation models! https://lnkd.in/g3hZteCG

    Finally, the GEAR Lab is hiring! We want the best roboticists in the world to join us on this moon-landing mission to solve physical AGI: https://lnkd.in/gTancpNK
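    The 1 -> N -> NxM fan-out described in the post can be sketched in a few lines. This is a toy illustration only (the function and variable names are mine, not NVIDIA's actual pipeline code): `scene_variants` stands in for RoboCasa-style visual/layout randomization, and `motion_variants` for MimicGen-style trajectory augmentation, which in the real pipeline would also filter out failed rollouts.

    ```python
    def multiply_demonstrations(human_trajs, scene_variants, motion_variants):
        """Fan each teleoperated demo out into len(scene_variants) x
        len(motion_variants) synthetic training samples."""
        dataset = []
        for traj in human_trajs:
            for scene in scene_variants:        # step 2: N visual variants
                for motion in motion_variants:  # step 3: M motion variants each
                    dataset.append({"demo": traj, "scene": scene, "motion": motion})
        return dataset
    ```

    With 1 human trajectory, 100 simulated kitchens, and 10 motion variants each, the dataset grows 1000x, which is the kind of multiplier the post describes.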

  • Andrew Ng (Influencer)

    Founder of DeepLearning.AI; Managing General Partner of AI Fund; Exec Chairman of LandingAI

    2,331,914 followers

    Last week, I described four design patterns for AI agentic workflows that I believe will drive significant progress: Reflection, Tool use, Planning and Multi-agent collaboration. Instead of having an LLM generate its final output directly, an agentic workflow prompts the LLM multiple times, giving it opportunities to build step by step to higher-quality output.

    Here, I'd like to discuss Reflection. It's relatively quick to implement, and I've seen it lead to surprising performance gains. You may have had the experience of prompting ChatGPT/Claude/Gemini, receiving unsatisfactory output, delivering critical feedback to help the LLM improve its response, and then getting a better response. What if you automate the step of delivering critical feedback, so the model automatically criticizes its own output and improves its response? This is the crux of Reflection.

    Take the task of asking an LLM to write code. We can prompt it to generate the desired code directly to carry out some task X. Then, we can prompt it to reflect on its own output, perhaps as follows:

    Here’s code intended for task X: [previously generated code]
    Check the code carefully for correctness, style, and efficiency, and give constructive criticism for how to improve it.

    Sometimes this causes the LLM to spot problems and come up with constructive suggestions. Next, we can prompt the LLM with context including (i) the previously generated code and (ii) the constructive feedback, and ask it to use the feedback to rewrite the code. This can lead to a better response. Repeating the criticism/rewrite process might yield further improvements. This self-reflection process allows the LLM to spot gaps and improve its output on a variety of tasks including producing code, writing text, and answering questions.

    And we can go beyond self-reflection by giving the LLM tools that help evaluate its output; for example, running its code through a few unit tests to check whether it generates correct results on test cases, or searching the web to double-check text output. Then it can reflect on any errors it found and come up with ideas for improvement.

    Further, we can implement Reflection using a multi-agent framework. I've found it convenient to create two agents, one prompted to generate good outputs and the other prompted to give constructive criticism of the first agent's output. The resulting discussion between the two agents leads to improved responses.

    Reflection is a relatively basic type of agentic workflow, but I've been delighted by how much it improved my applications’ results. If you’re interested in learning more about reflection, I recommend:
    - Self-Refine: Iterative Refinement with Self-Feedback, by Madaan et al. (2023)
    - Reflexion: Language Agents with Verbal Reinforcement Learning, by Shinn et al. (2023)
    - CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing, by Gou et al. (2024)

    [Original text: https://lnkd.in/g4bTuWtU ]
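    The generate → critique → rewrite loop described in the post can be sketched as follows. This is a minimal illustration under one assumption: `llm` is a placeholder for whatever client you use (OpenAI, Anthropic, Gemini, ...), treated as a plain function from a prompt string to a response string.

    ```python
    def reflect_and_refine(llm, task, max_rounds=2):
        """Generate a first draft, then alternate critique and rewrite.

        llm: any callable mapping a prompt string to a response string.
        Each round costs two extra model calls (one critique, one rewrite).
        """
        draft = llm(f"Write code to carry out the following task:\n{task}")
        for _ in range(max_rounds):
            critique = llm(
                f"Here's code intended for this task: {task}\n{draft}\n"
                "Check the code carefully for correctness, style, and "
                "efficiency, and give constructive criticism for how to improve it."
            )
            draft = llm(
                f"Task: {task}\nPrevious code:\n{draft}\nFeedback:\n{critique}\n"
                "Use the feedback to rewrite the code."
            )
        return draft
    ```

    Since every round adds two model calls, two or three rounds is usually the practical limit before latency and cost outweigh the quality gains.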

  • Brij kishore Pandey (Influencer)

    AI Architect | AI Engineer | Generative AI | Agentic AI

    694,908 followers

    As AI projects grow in complexity, having a robust architecture becomes crucial. Here's a battle-tested structure that has helped our team scale effectively:

    Key Features:
    • Modular architecture with clean separation of concerns
    • Multi-provider LLM support (OpenAI, Anthropic, Claude)
    • Advanced prompt engineering and chain management
    • Built-in rate limiting and caching
    • Comprehensive logging and monitoring

    Core Components:
    1. /config - Centralized configuration management
    2. /src/llm - Abstracted LLM clients for provider flexibility
    3. /src/prompt_engineering - Reusable prompt templates and chains
    4. /utils - Shared utilities for rate limiting, caching, and logging

    Why This Matters:
    • Reduced technical debt through organized code structure
    • Easier onboarding for new team members
    • Faster experimentation with different LLM providers
    • Production-ready error handling and monitoring
    • Scalable architecture that grows with your needs

    🔗 Check out the full structure on GitHub: https://lnkd.in/etVgfByc
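    As one illustration of the /src/llm and /utils ideas, here is a minimal sketch of a provider-agnostic client wrapped with rate limiting and caching. The class and method names are hypothetical (not the linked repo's actual code); concrete subclasses of `LLMClient` would wrap the OpenAI or Anthropic SDKs.

    ```python
    import time
    from abc import ABC, abstractmethod

    class LLMClient(ABC):
        """Provider-agnostic interface: swap providers without touching callers."""

        @abstractmethod
        def complete(self, prompt: str) -> str: ...

    class RateLimitedClient(LLMClient):
        """Wraps any LLMClient, enforcing a minimum interval between calls
        and serving repeated prompts from a small in-memory cache."""

        def __init__(self, inner: LLMClient, min_interval: float = 1.0):
            self.inner = inner
            self.min_interval = min_interval
            self._last_call = 0.0
            self._cache = {}  # prompt -> response

        def complete(self, prompt: str) -> str:
            if prompt in self._cache:
                return self._cache[prompt]  # cache hit: no API call, no wait
            wait = self.min_interval - (time.monotonic() - self._last_call)
            if wait > 0:
                time.sleep(wait)  # simple fixed-interval rate limit
            self._last_call = time.monotonic()
            result = self.inner.complete(prompt)
            self._cache[prompt] = result
            return result
    ```

    Because the wrapper itself implements `LLMClient`, rate limiting and caching compose transparently with any provider backend.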

  • Pascal BORNET

    #1 Top Voice in AI & Automation | Award-Winning Expert | Best-Selling Author | Recognized Keynote Speaker | Agentic AI Pioneer | Forbes Tech Council | 2M+ Followers ✔️

    1,502,282 followers

    ♻️ Recycling, reimagined. I came across Ameru’s AI Smart Bin, and it made me realize something we rarely talk about in sustainability: we don’t fail to recycle because we don’t care. We fail because the friction is too high.

    This bin doesn’t just collect waste. It sees what you throw, sorts it automatically, and even gives you real-time feedback.

    The results?
    ✅ 95%+ sorting accuracy
    ✅ Analytics that show you how to reduce waste
    ✅ ROI in under 2 years

    👉 Here’s the hidden insight: recycling is broken. Most of us want to recycle, but the system is designed for failure: too much friction, too many rules. The real innovation isn’t in AI or edge computing. It’s in making sustainability invisible. No guilt, no extra steps, just default behavior upgraded.

    💡 Actionable thought: whether you’re building tech, a product, or even a habit, ask yourself how you can make the right choice feel effortless. Because effort scales linearly. But effortlessness? That scales exponentially.

    PS: Imagine when every trash bin becomes a data point in the circular economy.

    👉 Do you think this kind of “invisible innovation” could transform how we recycle at home and at work?

    #GreenTech #AI #Innovation #Sustainability #CircularEconomy

  • Damien Benveniste, PhD (Influencer)

    Founder @ TheAiEdge | Follow me to learn about Machine Learning Engineering, Machine Learning System Design, MLOps, and the latest techniques and news about the field.

    173,045 followers

    The TikTok recommender system is widely regarded as one of the best in the world at the scale it operates at. It can recommend videos or ads, and even the other big tech companies have not been able to compete. Recommending on a platform like TikTok is tough because the training data is non-stationary: a user's interests can change in a matter of minutes, and the number of users, videos, and ads keeps changing. The predictive performance of a recommender system on a social media platform deteriorates in a matter of hours, so it needs to be updated as often as possible.

    TikTok built a streaming engine to ensure the model is continuously trained in an online manner. The model server generates features for the model to recommend videos, and in return, the user interacts with the recommended items. This feedback loop leads to new training samples that are immediately sent to the training server. The training server holds a copy of the model, and the model parameters are updated in the parameter server. Every minute, the parameter server synchronizes itself with the production model.

    The recommendation model is several terabytes in size, so it is very slow to synchronize such a big model across the network. That is why the model is only partially updated. The leading cause of non-stationarity (concept drift) comes from the sparse variables (users, videos, ads, etc.) that are represented by embedding tables. When a user interacts with a recommended item, only the vectors associated with the user and the item get updated, as well as some of the weights of the network. Therefore, only the updated vectors get synchronized on a minute basis, and the network weights are synchronized on a longer time frame.

    Typical recommender systems use fixed embedding tables, and the categories of the sparse variables get assigned to a vector through a hash function. Typically, the hash size is smaller than the number of categories, and multiple categories get assigned to the same vector. For example, multiple users share the same vector. This helps deal with the cold start problem for new users, and it puts a constraint on the maximum memory the whole table will use. But it also tends to reduce the performance of the model because user behaviors get conflated.

    Instead, TikTok uses dynamic embedding sizes such that new users can be added with their own vectors. They use a collisionless hashing function so each user gets their own vector. Low-activity users will not influence the model performance much, so TikTok dynamically removes those low-occurrence IDs as well as stale IDs. This keeps the embedding table small while preserving the quality of the model.

    Here is the TikTok paper: https://lnkd.in/g9fA62GD

    #machinelearning #datascience #artificialintelligence
    --
    👉 Learn more Machine Learning on my website: https://www.TheAiEdge.io
    --
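    A toy sketch of the collisionless table described above, with frequency-based admission (low-occurrence IDs never get a vector) and eviction of stale IDs. This is my own illustrative design following the post's description, not the actual implementation from the paper; thresholds, the zero-initialized vectors, and the `lookup`/`evict_stale` API are all assumptions.

    ```python
    import time

    class CollisionlessEmbeddingTable:
        """Each admitted ID maps to its own vector (no hash collisions).
        IDs seen fewer than admit_threshold times get no vector, and IDs
        not seen for ttl_seconds are evicted to keep the table small."""

        def __init__(self, dim, admit_threshold=5, ttl_seconds=86400):
            self.dim = dim
            self.admit_threshold = admit_threshold
            self.ttl = ttl_seconds
            self.counts = {}  # occurrence counter for not-yet-admitted IDs
            self.table = {}   # id -> (vector, last_seen_timestamp)

        def lookup(self, key, now=None):
            now = time.time() if now is None else now
            if key in self.table:
                vec, _ = self.table[key]
                self.table[key] = (vec, now)  # refresh staleness timestamp
                return vec
            # Not admitted yet: count occurrences, admit once frequent enough.
            self.counts[key] = self.counts.get(key, 0) + 1
            if self.counts[key] >= self.admit_threshold:
                vec = [0.0] * self.dim        # fresh, collision-free vector
                self.table[key] = (vec, now)
                del self.counts[key]
                return vec
            return None  # caller falls back to a shared default vector

        def evict_stale(self, now=None):
            now = time.time() if now is None else now
            stale = [k for k, (_, seen) in self.table.items() if now - seen > self.ttl]
            for key in stale:
                del self.table[key]
    ```

    The design choice mirrors the post: memory is bounded not by a fixed hash size (which conflates users) but by actively dropping IDs that are too rare or too stale to matter.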

  • Jason Miller (Influencer)

    Supply chain professor helping industry professionals better use data

    60,302 followers

    I continue to read many folks who argue that manufacturing job loss in the Midwest was driven largely by trade liberalization with China. This is not correct (and has huge implications for tariff policy). As evidence, consider two charts from Acemoglu & Restrepo (2017; https://lnkd.in/gh7AXtA8). Thoughts:

    • The top chart shows the exposure of workers in different commuter zones (~722 in the USA) to the adoption of industrial robots per thousand workers from 1993 to 2007. The dark brown region is the most affected area and covers Indiana, Ohio, Western Pennsylvania, Wisconsin, and especially my state of Michigan. The reason: motor vehicle and parts production disproportionately adopted industrial robots.

    • The bottom chart shows the exposure of workers to import competition from China over the same period. Here we see the center of gravity shifts to the South due to textiles and furniture production.

    • As Richard Baldwin has noted, the joint impact of automation and globalization, which he terms "globotics", was incredibly disruptive to blue-collar workers. The challenge is that tariffs don't address either issue.

    Implication: increased automation and adoption of industrial robots by themselves mean that manufacturing payrolls have practically no chance of increasing back towards late-1990s levels (https://lnkd.in/g5Um6mt9). As such, for folks wanting to examine any impacts of tariffs on domestic manufacturing, focus instead on industrial production of manufacturing ex hi-tech products [due to measurement issues]. Those data can be found at https://lnkd.in/garxnDfa. As of today, manufacturing output is at the same level we saw in late 2019, just before COVID-19 hit.

    #supplychain #economics #markets #shipsandshipping #freight #manufacturing
