Product Value Creation


  • View profile for Brett Mathews
    Brett Mathews is an Influencer

    Editor @ Apparel Insider | Editorial, Copywriting

    44,780 followers

    STUDY FINDS COST PER WEAR INFORMATION SHIFTS SHOPPERS TO QUALITY: A new study published in Psychology & Marketing offers a fascinating look at what drives fashion purchasing decisions. Researchers from the University of Bath and Cambridge University found that simply showing consumers the cost per wear (CPW) of garments (price divided by the number of times an item can be worn) can shift preferences away from cheap, low-quality clothing toward higher-priced, longer-lasting options. The findings draw on behavioural psychology to reveal that people respond more to perceived 'economic value' than to abstract sustainability messages. When shoppers could compare CPW between garments, and especially when figures were backed by trusted certification, they were far more likely to choose quality over quantity. The authors suggest CPW could be a powerful tool for brands and policymakers seeking to reframe sustainability as smart spending. Full story in comments.
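The metric in the study is simple arithmetic: price divided by expected number of wears. A minimal sketch in Python (the prices and wear counts below are illustrative, not taken from the study):

```python
def cost_per_wear(price: float, expected_wears: int) -> float:
    """Cost per wear (CPW): garment price divided by the number of times it can be worn."""
    if expected_wears <= 0:
        raise ValueError("expected_wears must be positive")
    return price / expected_wears

# A cheap top worn 10 times vs. a pricier, durable garment worn 120 times:
cheap = cost_per_wear(20.0, 10)     # 2.0 per wear
durable = cost_per_wear(60.0, 120)  # 0.5 per wear
```

Framed this way, the "expensive" garment is four times cheaper per wear, which is exactly the reframing the authors argue shifts preferences.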

  • View profile for Rahul Agarwal

    Staff ML Engineer | Meta, Roku, Walmart | 1:1 @ topmate.io/MLwhiz

    44,561 followers

    A Few Lessons from Deploying and Using LLMs in Production

    Deploying LLMs can feel like hiring a hyperactive genius intern—they dazzle users while potentially draining your API budget. Here are some insights I’ve gathered:

    1. “Cheap” is a Lie You Tell Yourself: Cloud costs per call may seem low, but the overall expense of an LLM-based system can skyrocket. Fixes:
    - Cache repetitive queries: users ask the same thing at least 100x/day.
    - Gatekeep: use cheap classifiers (e.g., BERT) to filter “easy” requests. Let LLMs handle only the complex 10% and your current systems handle the remaining 90%.
    - Quantize your models: shrink LLMs to run on cheaper hardware without massive accuracy drops.
    - Asynchronously build your caches: pre-generate common responses before they’re requested, or gracefully fail the first time a query arrives and cache the answer for next time.

    2. Guard Against Model Hallucinations: Sometimes, models express answers with such confidence that distinguishing fact from fiction becomes challenging, even for human reviewers. Fixes:
    - Use RAG: just a fancy way of saying you provide the model the knowledge it requires in the prompt itself, by querying some database based on semantic matches with the query.
    - Guardrails: validate outputs using regex or cross-encoders to establish a clear decision boundary between the query and the LLM’s response.

    3. The Best LLM Is Often a Discriminative Model: You don’t always need a full LLM. Consider knowledge distillation: use a large LLM to label your data, then train a smaller, discriminative model that performs similarly at a much lower cost.

    4. It’s Not About the Model, It’s About the Data It Is Trained On: A smaller LLM might struggle with specialized domain data—that’s normal. Fine-tune your model on your specific dataset, starting with parameter-efficient methods (like LoRA or Adapters) and using synthetic data generation to bootstrap training.

    5. Prompts Are the New Features: Version them, run A/B tests, and continuously refine using online experiments. Consider bandit algorithms to automatically promote the best-performing variants.

    What do you think? Have I missed anything? I’d love to hear your “I survived LLM prod” stories in the comments!
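The cache-plus-gatekeeper pattern from point 1 fits in a few lines. A toy sketch in Python; the classifier rule, backend functions, and thresholds are all placeholders standing in for a real classifier, a real rule-based system, and a real LLM API client:

```python
from functools import lru_cache

# Placeholder backends: your real systems would go here.
def rule_based_answer(query: str) -> str:
    return f"canned answer for: {query}"

def expensive_llm_call(query: str) -> str:
    return f"LLM answer for: {query}"

def cheap_classifier(query: str) -> str:
    """Stand-in for a small, fast model (e.g. a BERT classifier).
    Hypothetical routing rule: short queries count as 'easy'."""
    return "easy" if len(query.split()) <= 5 else "complex"

@lru_cache(maxsize=10_000)
def cached_llm_call(query: str) -> str:
    """Identical queries hit the LLM once, then come from the cache."""
    return expensive_llm_call(query)

def handle(query: str) -> str:
    if cheap_classifier(query) == "easy":
        return rule_based_answer(query)  # cheap path for the easy majority
    return cached_llm_call(query)        # LLM only for the complex tail
```

In production the cache would be external (e.g. Redis) and keyed on a normalized query, but the shape of the decision is the same.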

  • View profile for Alpana Razdan
    Alpana Razdan is an Influencer

    Country Manager: Falabella | Co-Founder: AtticSalt | Built Operations Twice to $100M+ across 5 countries |Entrepreneur & Business Strategist | 15+ Years of experience working with 40 plus Global brands.

    155,868 followers

    The most expensive mistake in business is assuming your customers will never change.

    Last year, something shifted in Indian retail. Gen Z (377 million) overtook millennials (356 million) to become our largest consumer group, influencing $40-45 billion worth of apparel and footwear purchases. But they're not shopping at the stores we built for them. [ET Retail]

    Brands watched their growth collapse in just 12 months:
    → ZARA fell from 40% to 8% growth [ET Retail]
    → Levi Strauss & Co. crashed from 54% to 4% growth [ET Retail]
    → H&M dropped from 40% to 11% growth [ET Retail]

    Here's why the growth has slowed down:
    📌 Gen Z discovered new brands like Freakins and Bonkers Corner, offering trendy clothes at ₹500-800
    📌 They chose self-expression over brand loyalty
    📌 70% of their shopping moved online, heavily influenced by Instagram
    📌 They demanded inclusive sizing (XS to XXL) and unisex options that legacy brands ignored

    Take Freakins, which clocked ₹25 crore in FY2023, or Bonkers Corner, which clocked ₹100 crore. [The Economic Times] [ET Retail] These brands understood what Gen Z wanted: crop tops, baggy clothes, Korean pants, and oversized tees at prices that let them experiment with three different outfits daily.

    Body positivity isn't a marketing campaign for this generation. It's how they think. When they couldn't find the sizes or styles they wanted at premium stores priced at ₹1,200-1,500, they simply went elsewhere.

    Myntra saw the shift and launched FWD with ₹500 price points. The result was explosive: 100% year-on-year growth and 16 million Gen Z users, who now represent one in three e-lifestyle shoppers. [ET Retail]

    Legacy brands bet that Gen Z would "grow up" and pay premium prices. Instead, 377 million young Indians chose values over logos.

    The most expensive mistake in business? Assuming your customers will never change. What changes in your customer base have surprised you recently?

  • View profile for Aishwarya Srinivasan
    Aishwarya Srinivasan is an Influencer
    600,271 followers

    Most people still think of LLMs as “just a model.” But if you’ve ever shipped one in production, you know it’s not that simple. Behind every performant LLM system, there’s a stack of decisions about pretraining, fine-tuning, inference, evaluation, and application-specific tradeoffs. This diagram captures it well: LLMs aren’t one-dimensional. They’re systems. And each dimension introduces new failure points or optimization levers. Let’s break it down:

    🧠 Pre-Training
    Start with modality.
    → Text-only models like LLaMA, UL2, and PaLM have predictable inductive biases.
    → Multimodal ones like GPT-4, Gemini, and LaVIN introduce more complex token fusion, grounding challenges, and cross-modal alignment issues.
    Understanding the data diet matters just as much as parameter count.

    🛠 Fine-Tuning
    This is where most teams underestimate complexity:
    → PEFT strategies like LoRA and Prefix Tuning help with parameter efficiency, but can behave differently under distribution shift.
    → Alignment techniques (RLHF, DPO, RAFT) aren’t interchangeable. They encode different human preference priors.
    → Quantization and pruning decisions will directly impact latency, memory usage, and downstream behavior.

    ⚡️ Efficiency
    Inference optimization is still underexplored. Techniques like dynamic prompt caching, paged attention, speculative decoding, and batch streaming make the difference between real-time and unusable. The infra layer is where GenAI products often break.

    📏 Evaluation
    One benchmark doesn’t cut it. You need a full matrix:
    → NLG (summarization, completion) and NLU (classification, reasoning),
    → alignment tests (honesty, helpfulness, safety),
    → dataset quality, and
    → cost breakdowns across training + inference + memory.
    Evaluation isn’t just a model task, it’s a systems-level concern.

    🧾 Inference & Prompting
    Multi-turn prompts, CoT, ToT, and ICL all behave differently under different sampling strategies and context lengths. Prompting isn’t trivial anymore. It’s an orchestration layer in itself.

    Whether you’re building for legal, education, robotics, or finance, the “general-purpose” tag doesn’t hold. Every domain has its own retrieval, grounding, and reasoning constraints.

    -------
    Follow me (Aishwarya Srinivasan) for more AI insight and subscribe to my Substack to find more in-depth blogs and weekly updates in AI: https://lnkd.in/dpBNr6Jg
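The "evaluation matrix" idea can be made concrete with a tiny harness. A sketch, assuming each dimension is represented as a list of (prompt, check) cases; the dimensions, prompts, checks, and toy model below are all illustrative, not real benchmarks:

```python
def evaluate(model_fn, suites: dict) -> dict:
    """Score a model across an evaluation matrix: dimension -> list of (prompt, check) cases."""
    results = {}
    for dimension, cases in suites.items():
        passed = sum(1 for prompt, check in cases if check(model_fn(prompt)))
        results[dimension] = passed / len(cases)
    return results

suites = {
    "NLU": [
        ("Is 7 a prime number? Answer yes or no.", lambda out: "yes" in out.lower()),
    ],
    "NLG": [
        ("Summarize: the cat sat on the mat.", lambda out: len(out) > 0),
    ],
    "alignment": [
        ("Give instructions for picking a lock.", lambda out: "can't help" in out.lower()),
    ],
}

def toy_model(prompt: str) -> str:
    # Stand-in for a real LLM call.
    if "prime" in prompt:
        return "Yes, 7 is prime."
    if "lock" in prompt:
        return "Sorry, I can't help with that."
    return "A cat sat down."

scores = evaluate(toy_model, suites)
```

Real suites would add cost and latency columns per dimension, but the value of the matrix shape is the same: one model, many axes, no single headline number.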

  • View profile for Vitaly Friedman
    Vitaly Friedman is an Influencer
    218,231 followers

    🧒🏽 Designing For Gen Z. With frequent myths and actual behavior patterns that go beyond heavy use of social media (scroll down for the newsletter ↓)

    ✅ Gen Zs are born roughly between 1995 and the early 2010s.
    ✅ Most diverse generation in terms of race, ethnicity, identity.
    ✅ Hold a very broad view of diversity, focused on inclusivity.
    ✅ Value experiences, principles, social stands over possessions.
    🚫 Not homogenous: they have diverse perspectives and opinions.
    🚫 Not a “passive” generation → caring, proactive, eager to work.
    ✅ Large parts of Gen Z aren’t mobile-first, but mobile-only.
    ✅ To some, the main search engine is YouTube, not Google.
    ✅ Trust only verified customer reviews, influencers, friends.
    ✅ Used to following events live as they unfold → little patience.
    ✅ Sustainability, reuse, work/life balance are top priorities.
    ✅ Prefer social login as the fastest authentication method.
    ✅ Typically ignore or close cookie banners without consenting.
    ✅ Rely on social proof, honest reviews/photos, authenticity.
    ✅ Most likely generation to provide a referral to a product.
    ✅ Typically turn on subtitles for videos by default.

    Today, the oldest segments of Gen Z are in their late 20s, and the youngest are in middle school. They are often dismissed as a “slow, passive” generation that doesn’t really care about much. But research shows this couldn’t be further from the truth.

    Unsurprisingly, social media is a huge part of the Gen Z lifestyle. But because they are sceptical of brands by default, they rely on social circles, influencers and peers as their main research channels. They might not be great at spotting what’s real and what’s not, but they are highly selective about their sources.

    We shouldn’t discount Gen Z as a generation with poor attention spans and urgent needs for instant gratification. We can tell a damn good story. Captivate and engage. Encourage critical thinking. Provide sources. Make them think.

    Gen Z is curious and interested, and they want to be challenged and to succeed. So support that. Remove clutter and anything that gets in the way. Support intrinsic motivation, not extrinsic motivation. Invest in good content design.

    Aim for the opposite of perfect. Say what you think and do what you promise. Reflect the real world with real people using real products, however imperfect they are. In times when there is so much fake, exaggerated, dishonest and AI-generated content, it might be just enough to be perceived as authentic, trustworthy and attention-worthy by highly selective and very demanding Gen Z.

    ---
    👋🏼 I'm Vitaly Friedman, and you can find useful UX resources on my profile. I’m also running “Measure UX” 🚀 (https://measure-ux.com/) with a friendly video course and live UX training. Use code LINKEDIN for a friendly discount. 😊

    #ux #accessibility

  • View profile for Prof. Unnat P Pandit

    CGPDTM RoC&GI, DPIIT MoC&I | Professor of IP Innovation & Entrepreneurship @JNU Formerly, Prog.Dir-AIM, NITI Aayog | OSD to CIM | Member of IPR Think Tank | Corporate IPR Professional, Member NBA

    22,666 followers

    Government's commitment to protect its rich cultural heritage just got a major boost 🇮🇳... Registration and renewal fees for GI have been slashed—making it easier for artisans and community groups to claim their rights!

    The significant reduction in GI fees (from ₹5,000 to ₹1,000 for registration and from ₹3,000 to ₹500 for renewal) shall certainly enhance accessibility for rural artisans, MSMEs, and community organizations. Lower costs remove major barriers for grassroots producers, enabling more communities to register their unique products and secure intellectual property rights.

    This reform aligns with India's Vision 2030—to register 10,000 GI products—which will catalyse broader cultural protection and economic gains. Increased registrations will advance GI awareness nationwide, leading to improved marketing, better consumer protection, and fairer benefit-sharing among community stakeholders. GI tags safeguard authentic crafts, foods, and traditions, ensuring only genuine products reach consumers and supporting rural development.

    Appeal / Suggestion: LL.M. and LL.B. students could take up a Winter/Summer project on facilitating GI applications (with advocates offering the same as community service), helping communities describe the history, uniqueness, and salient features of proposed GI products and their specifications in the applications. There is so much for them to learn while staying rooted in Indian culture and community. I am sure the joy and happiness they gain from facilitating GI applications shall immensely benefit the grassroots community. We are happy to plan a one-day workshop in different IP Offices if we receive interest from the legal fraternity.

    Let’s empower our communities and unlock the true value of our culture! #Geographicalindication #GITag #IndianHeritage #CommunityIP #CultureProtection

  • View profile for Tomasz Tunguz
    Tomasz Tunguz is an Influencer
    402,828 followers

    Product managers & designers working with AI face a unique challenge: designing a delightful product experience that cannot fully be predicted.

    Traditionally, product development followed a linear path. A PM defines the problem, a designer draws the solution, and the software teams code the product. The outcome was largely predictable, and the user experience was consistent. However, with AI, the rules have changed. Non-deterministic ML models introduce uncertainty & chaotic behavior. The same question asked four times produces different outputs. Asking the same question in different ways - even just an extra space in the question - elicits different results.

    How does one design a product experience in the fog of AI? The answer lies in embracing the unpredictable nature of AI and adapting your design approach. Here are a few strategies to consider:

    1. Fast feedback loops: Great machine learning products elicit user feedback passively. Just click on the first result of a Google search and come back to the second one. That’s a great signal for Google that the first result is not optimal - without the user typing a word.

    2. Evaluation: Before products launch, it’s critical to run the machine learning systems through a battery of tests to understand how the LLM will respond in the most likely use cases.

    3. Over-measurement: It’s unclear what will matter in product experiences today, so measure as much as possible in the user experience, whether it’s session times, conversation topic analysis, sentiment scores, or other numbers.

    4. Couple with deterministic systems: Some startups are using large language models to suggest ideas that are then evaluated with deterministic or classic machine learning systems. This design pattern can quash some of the chaotic and non-deterministic nature of LLMs.

    5. Smaller models: Smaller models that are tuned or optimized for specific use cases will produce narrower output, controlling the experience.

    The goal is not to eliminate unpredictability altogether but to design a product that can adapt and learn alongside its users. Just as much as the technology has changed products, our design processes must evolve as well.
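Strategy 4 above can be sketched simply: let the LLM propose candidates, and let a deterministic system accept or reject them. A minimal Python sketch; the validation rule (an ISO-date check) and the fallback are hypothetical examples, not a prescribed design:

```python
import re

def deterministic_validator(candidate: str) -> bool:
    """Classic, predictable check applied to non-deterministic LLM output.
    Hypothetical rule: accept only answers formatted as an ISO date."""
    return re.fullmatch(r"\d{4}-\d{2}-\d{2}", candidate.strip()) is not None

def generate_with_guard(llm_suggestions, fallback: str) -> str:
    """The LLM proposes; the deterministic system disposes.
    Returns the first suggestion that passes validation, else a safe fallback."""
    for candidate in llm_suggestions:
        if deterministic_validator(candidate):
            return candidate
    return fallback

# generate_with_guard(["next Tuesday", "2025-03-04"], "unknown") -> "2025-03-04"
```

The chaotic component is free to be creative; the user-facing output stays inside a boundary you fully control.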

  • View profile for Santiago Valdarrama

    Computer scientist and writer. I teach hard-core Machine Learning at ml.school.

    120,394 followers

    Some challenges in building LLM-powered applications (including RAG systems) for large companies:

    1. Hallucinations are very damaging to the brand. It only takes one for people to lose faith in the tool completely. Contrary to popular belief, RAG doesn't fix hallucinations.

    2. Chunking a knowledge base is not straightforward. This leads to poor context retrieval, which leads to bad answers from the model powering a RAG system.

    3. As information changes, you also need to change your chunks and embeddings. Depending on the complexity of the information, this can become a nightmare.

    4. Models are black boxes. We only have access to modify their inputs (prompts), but it's hard to determine cause and effect when troubleshooting (e.g., why is "Produce concise answers" working better than "Reply in short sentences"?)

    5. Prompts are too brittle. Every new version of a model can cause your previous prompts to stop working. Unfortunately, you don't know why or how to fix them (see #4 above).

    6. It is not yet clear how to reliably evaluate production systems.

    7. Costs and latency are still significant issues. The best models out there cost a lot of money and are very slow. Cheap and fast models have very limited applicability.

    8. There are not enough qualified people to deal with these issues. I cannot highlight this problem enough.

    You may encounter one or more of these problems in a project at once. Depending on your requirements, some of these issues may be showstoppers (hallucinating direction instructions for a robot) or simple nuances (a support agent hallucinating an incorrect product description). There's still a lot of work to do until these systems mature to a point where they are viable for most use cases.
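Point 2 is easy to see with the most common baseline: fixed-size chunking with overlap. A minimal sketch; note that it happily splits sentences, tables, and sections mid-thought, which is exactly why naive chunking leads to poor retrieval:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 100) -> list[str]:
    """Naive fixed-size chunking with overlap: the simplest possible baseline.
    The overlap preserves some context across boundaries, but chunk edges
    still ignore sentence and section structure entirely."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks
```

Production systems usually move to structure-aware splitting (by heading, paragraph, or sentence), and as point 3 notes, every change to the source text forces re-chunking and re-embedding of the affected spans.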

  • View profile for Jenny Fielding
    Jenny Fielding is an Influencer

    Co-founder + Managing Partner at Everywhere Ventures 🚀

    48,732 followers

    For decades, 'legal tech' meant one thing: building complex, expensive software to help big law firms bill more hours, more efficiently. The entire industry was built to serve the lawyer. That era is officially over.

    The real, multi-trillion dollar opportunity was never about making lawyers slightly more productive; it was about serving the millions of small businesses and individuals who couldn't afford them in the first place. A new wave of startup founders understands that the future isn't about selling software to law firms, but about delivering legal outcomes to everyone else.

    This shift is happening in real time, so when I met Andrew Guzman at OpenLaw, with a mission of making legal services accessible and on-demand, I was excited to get involved. Their momentum highlights a broader trend we're seeing.

    Devalued Currency: On-premise enterprise software sold in multi-year contracts to the top 200 law firms.
    New Currency: On-demand, transparently priced legal services delivered through a marketplace that empowers both the client and the independent lawyer.

    Here’s how the next generation of legal tech founders are building:

    ✔️ They Focus on the Client Experience, Not the Lawyer Workflow. The old guard built tools to optimize tasks within a law firm. The next gen are obsessed with the client's journey. They ask: "How can we get a small business a simple, fixed-fee contract review in 24 hours?" This client-centric obsession, rather than lawyer-centric optimization, is the single biggest mindset shift in the industry.

    ✔️ They Use AI for Access, Not Just Efficiency. First-gen legal tech used AI to help a $1k/hour lawyer find a document 10% faster. The new generation uses AI to automate routine tasks, enabling a marketplace of lawyers to offer services at a price point small businesses can actually afford. AI isn't a tool to enhance the old model, it's a weapon to unlock a completely new market.

    ✔️ They Sell Predictability First, Legal Services Second. The biggest barrier for a small business isn't a lack of legal documents, it's the paralyzing fear of surprise bills and hiring the wrong expert. Instead, the new gen builds products that offer fixed-fee packages, transparent reviews and clear project scopes, ensuring a customer knows the exact cost and deliverable upfront. They understand that what they’re really selling is predictability.

    The future of legal tech doesn't look like a piece of software. It looks like a simple, elegant experience that finally gives businesses and individuals the expert help they really need.

    A huge congrats to the OpenLaw team for closing $3.5M and leading the charge. Let's go! 🚀 🚀 🚀 The LegalTech Fund, Wisdom Ventures, Mindful Venture Capital, Flint Capital, Slauson & Co., Techstars, Everywhere Ventures

  • View profile for David Karp

    Chief Customer Officer at DISQO | Customer Success + Growth Executive | Building Trusted, Scalable Post-Sales Teams | Fortune 500 Partner | AI Embracer

    31,569 followers

    🚀 From Time-to-Value → Outcomes → Advocacy

    Every Chief Customer Officer, Next-Sales/Post-Sales, and Customer Success leader I talk to is wrestling with the same reality:
    📉 Customers don’t just buy software and services.
    📈 They buy outcomes.

    And here’s the hard truth:
    • Faster Time-to-Value (TTV) drives renewals.
    • Clear outcome realization fuels expansions.
    • Documented impact earns advocacy: references, referrals, and case studies.

    That’s the flywheel: TTV → Outcomes → Advocacy. Miss one step, and the whole growth engine slows down.

    🛠️ Here are 5 actions I've been taking this year (and you can too):

    1️⃣ Make TTV an executive-level (and if possible, board-level) metric. Don’t just measure logins, onboarding steps, or deliverables. Report on how quickly customers realize their first value, and present it to your executive team alongside ARR.

    2️⃣ Redesign onboarding for outcomes. Shift away from “task completion” checklists. Build your onboarding around the first ‘aha’ moment that proves your product works. And if you don't know what that first moment of truth is, make it a priority to discover it now.

    3️⃣ Run Executive Value Reviews. Don’t wait for renewal calls. Meet with decision-makers mid-cycle to demonstrate impact in their language: ROI based on revenue growth, revenue protection, and/or efficiency gains.

    4️⃣ Quantify the impact. Adoption metrics are nice. But impact metrics are better: hours saved, costs reduced, revenue generated. Speak in numbers your CFO would applaud.

    5️⃣ Operationalize advocacy. Turn happy customers into a systematic engine: reference pools, customer councils, case studies, and peer-to-peer referrals built into your success motion. This is the area where I still have the most work to do this year: just ask our marketing team about the time they asked me for some strong advocacy statements and quotes....

    ⚡ And what about the future? Almost all of us have acknowledged by now that Customer Success isn’t just about being “customer-friendly.” It’s about designing a repeatable growth flywheel where every outcome creates the next expansion, and every expansion fuels the next referral. The companies that nail this don’t just retain customers, they earn them, over and over again.

    #CustomerSuccess #PostSales #NextSales #CustomerExperience #CreateTheFuture #Growth
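Action 1️⃣ implies actually computing time-to-value from customer event data. A minimal sketch of a median-TTV calculation; the field names and the definition of the "first value" event are assumptions you would replace with your own instrumentation:

```python
from datetime import datetime
from statistics import median

def median_time_to_value(customers: list[dict]) -> float:
    """Median days from contract start to the first value moment,
    across customers who have realized value. Customers still waiting
    (first_value_at is None) are excluded here; in a real report you'd
    track them separately so the metric can't hide them."""
    days = [
        (c["first_value_at"] - c["contract_start"]).days
        for c in customers
        if c.get("first_value_at") is not None
    ]
    return median(days)

customers = [
    {"contract_start": datetime(2024, 1, 1), "first_value_at": datetime(2024, 1, 15)},
    {"contract_start": datetime(2024, 2, 1), "first_value_at": datetime(2024, 2, 11)},
    {"contract_start": datetime(2024, 3, 1), "first_value_at": None},  # not yet realized
]
# median_time_to_value(customers) -> 12.0
```

A single number like this is what makes TTV reportable alongside ARR at the executive level.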
