
Using Generative AI & Machine Learning in the Enterprise
San Francisco + Virtual | November 29

Amey Dharwadker
Engineering Manager, Machine Learning at Meta
Amey Porobo Dharwadker is a Machine Learning Engineering Manager at Meta (Facebook), where he supports the Video Recommendations Ranking team, which develops personalization models used by billions of users globally. There, he has been instrumental in driving significant increases in user engagement and revenue through his work in the News Feed, Ads, and Videos ranking organizations. He has authored several publications and patents in the fields of recommender systems and machine learning, and has served as a program committee member and reviewer for top-tier ML conferences and journals. In addition, he mentors students, engineers, data scientists, and early-stage companies through hackathons, angel syndicates, and startup accelerators. As a thought leader, he is frequently invited to speak at events to share expert insight on AI and ML, and he has served as a judge for renowned technology competitions.
Meet the speaker at the San Francisco Data Science Conference on November 29! Book your pass here: https://www.datascience.salon/san-francisco/
Watch live: November 29 @ 4:35PM – 5:05PM ET
Building Next-Gen Recommender Systems with Large Language Models: Strategies and Insights
In today’s data-driven landscape, recommender systems are ubiquitous, powering personalized user experiences. However, traditional recommendation models often fall short in capturing the nuances of user preferences and catering to diverse user interests. This talk delves into the transformative power of Large Language Models (LLMs) and how they are reshaping the future of recommender system personalization. We offer a comprehensive exploration of integration strategies for LLMs, uncovering where and how they fit into the modern recommendation ecosystem. We also discuss the practical challenges and opportunities that arise when implementing LLMs, providing industry-specific insights and solutions. You will gain a deep understanding of techniques such as fine-tuning and prompt tuning, and see how they unlock the full potential of LLMs in recommender systems. By the end of this talk, you will be equipped with strategies for building next-generation recommender systems that deliver highly personalized and efficient recommendations through the strategic use of large language models.
Key takeaways for attendees:
1. Learn how Large Language Models (LLMs) can offer more tailored user experiences in next-gen recommender systems.
2. Understand where and how to seamlessly integrate LLMs into recommendation pipelines, along with practical insights and solutions for industry-specific challenges that arise.
3. Discover practical techniques like fine-tuning and prompt tuning to unlock the full potential of LLMs in modern recommender systems (a minimal illustrative sketch follows below).
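
To give a flavor of the prompt-based integration pattern touched on in the takeaways, here is a minimal sketch of using an LLM to re-rank a set of candidate videos given a user's watch history, assuming a generic completion-style LLM client. The call_llm function, the build_rerank_prompt helper, and all example names are hypothetical placeholders for illustration, not methods presented in the talk.

# Minimal sketch: prompt-based re-ranking of candidates with an LLM (illustrative only).
# `call_llm` is a hypothetical placeholder for any LLM completion client.

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its text completion."""
    raise NotImplementedError("Wire this up to the LLM provider of your choice.")

def build_rerank_prompt(user_history: list[str], candidates: list[str]) -> str:
    """Format the watch history and candidate items into a ranking instruction."""
    history = "\n".join(f"- {title}" for title in user_history)
    options = "\n".join(f"{i + 1}. {title}" for i, title in enumerate(candidates))
    return (
        "A user recently watched:\n"
        f"{history}\n\n"
        "Rank the following candidate videos from most to least relevant for this user. "
        "Answer with the candidate numbers only, comma-separated.\n"
        f"{options}"
    )

def rerank(user_history: list[str], candidates: list[str]) -> list[str]:
    """Ask the LLM for an ordering and map it back onto the candidate list."""
    response = call_llm(build_rerank_prompt(user_history, candidates))
    order = [int(tok) - 1 for tok in response.split(",") if tok.strip().isdigit()]
    return [candidates[i] for i in order if 0 <= i < len(candidates)]

In practice, a step like this would typically sit after candidate retrieval in the recommendation pipeline, and the choice between prompting and fine-tuning would depend on the product's latency, cost, and quality constraints.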