Virtual Talk: Vulnerabilities in LLMs Stemming from a Leak in GPU Local Memory
Plus: Robots can autonomously cook and give high fives. A video lecture by a Stanford AI Lab researcher.
Hello, fellow human! Sophia here with some updates on the talks we will be hosting soon.
Table of Contents:
Upcoming virtual talk on April 18th: Vulnerabilities in LLMs stemming from a leak in GPU local memory.
Video recording of the past talk: Robots have learned how to autonomously cook and give high fives. A lecture by Zipeng Fu of the Stanford AI Lab on building deployable robotic systems.
Virtual Talk: Vulnerabilities in LLMs Stemming from a Leak in GPU Local Memory
If you are into AI security, this talk is definitely for you. Tyler Sorensen, a security researcher and Assistant Professor at UC Santa Cruz, recently discovered a GPU vulnerability that lets attackers read data left behind in GPU local memory and use it to reconstruct LLM responses with high precision.
His research revealed that the vulnerability affects a wide variety of GPUs, including devices from Apple, AMD, and Qualcomm.
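To give a flavor of the attack pattern, here is a conceptual Python/Numba sketch of a "listener" kernel, not Tyler's actual proof of concept, which targets the affected GPUs through their native APIs. The listener declares GPU local (shared) memory, never writes to it, and dumps whatever is already there. Note that NVIDIA GPUs were not reported to be affected, so on a CUDA device this will typically read back zeros; the sketch only illustrates the shape of the probe.

```python
import numpy as np
from numba import cuda, float32

LOCAL_WORDS = 1024  # compile-time constant: floats of local memory to sample

@cuda.jit
def listener(out):
    # Declare on-chip local (shared) memory without ever initializing it.
    # On a vulnerable GPU, this region can still hold values left behind
    # by a previous kernel, e.g. intermediate activations of an LLM.
    scratch = cuda.shared.array(shape=LOCAL_WORDS, dtype=float32)
    i = cuda.threadIdx.x
    if i < LOCAL_WORDS:
        out[i] = scratch[i]  # copy the leftover contents to host memory

out = np.zeros(LOCAL_WORDS, dtype=np.float32)
listener[1, LOCAL_WORDS](out)  # launch one block of 1024 threads
print("non-zero leftover words:", int(np.count_nonzero(out)))
```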
In this talk, Tyler will walk through the vulnerability in more detail and explain how to mitigate security risks in GPU-based systems.
Learn more about the upcoming virtual talk and register here.
Video recording of the past talk: Robots have learned how to autonomously cook and give high fives.
In this lecture, Zipeng Fu from the Stanford AI Lab focuses on two approaches the research team is actively working on: Robot Parkour Learning and Imitation Learning, i.e., learning from human demonstrations, which is a huge trend in robotics right now.
Let me highlight some cool Imitation Learning results here.
Imitation Learning: humans demonstrate tasks in the real world so that robots can acquire those skills.
How do you train an Imitation Learning policy?
Data Collection: There are few high-quality datasets for robotics, which makes data collection even more important than in other AI domains.
Data Utilization: The collected data is used to train a Transformer-based architecture, along with other approaches such as a Diffusion-based policy.
To make Imitation Learning more efficient, Zipeng and his collaborators at Stanford created Mobile ALOHA, a bimanual manipulation setup that supports data collection via teleoperation and includes a learning pipeline that lets robots learn from human demonstrations.
In case you are wondering how many times humans should show a robot how to do things: it takes around 50 demonstrations to train each task. To utilize the collected data, the research team used a Transformer-based architecture in which the model encodes RGB camera images and decodes a short sequence (a "chunk") of actions. They also tried a Diffusion-based policy, which showed good results.
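Here is a minimal PyTorch sketch of what such an encode-images, decode-actions pipeline can look like. All names and sizes (the tiny CNN backbone, d_model=256, the 50-step chunk, the 14-dimensional action space) are illustrative assumptions on our part, not the team's actual implementation:

```python
import torch
import torch.nn as nn

class ActionChunkPolicy(nn.Module):
    """Sketch: encode RGB camera frames, decode a short chunk of actions."""
    def __init__(self, chunk_len=50, action_dim=14, d_model=256):
        super().__init__()
        # Tiny CNN stands in for the image encoder used in practice.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, stride=4), nn.ReLU(),
            nn.Conv2d(64, d_model, kernel_size=5, stride=4), nn.ReLU(),
        )
        # One learned query per future action step in the chunk.
        self.queries = nn.Parameter(torch.randn(chunk_len, d_model))
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=4)
        self.head = nn.Linear(d_model, action_dim)  # joint targets per step

    def forward(self, rgb):                        # rgb: (B, 3, H, W)
        feats = self.backbone(rgb)                 # (B, d_model, h, w)
        tokens = feats.flatten(2).transpose(1, 2)  # (B, h*w, d_model)
        q = self.queries.unsqueeze(0).expand(rgb.size(0), -1, -1)
        decoded = self.decoder(q, tokens)          # queries attend to image tokens
        return self.head(decoded)                  # (B, chunk_len, action_dim)

policy = ActionChunkPolicy()
actions = policy(torch.randn(1, 3, 224, 224))  # -> torch.Size([1, 50, 14])
```

A Diffusion-based policy takes a different route to the same output: instead of decoding the chunk directly, a denoising model iteratively refines a noisy action sequence conditioned on the observations.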
The researchers found that co-training on the newly collected data together with existing static ALOHA datasets boosts performance on manipulation tasks by up to 90%.
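Co-training here just means mixing the two demonstration sources during training. A PyTorch sketch of one common way to do that, with the dataset sizes and the 50/50 sampling ratio as assumptions (the team's exact recipe may differ):

```python
import torch
from torch.utils.data import (ConcatDataset, DataLoader, TensorDataset,
                              WeightedRandomSampler)

# Hypothetical stand-ins for the two demonstration sources:
# (RGB frame, 50-step action chunk) pairs.
mobile_demos = TensorDataset(torch.randn(200, 3, 224, 224), torch.randn(200, 50, 14))
static_demos = TensorDataset(torch.randn(800, 3, 224, 224), torch.randn(800, 50, 14))

combined = ConcatDataset([mobile_demos, static_demos])
# Weight samples so each source contributes ~50% of every batch,
# regardless of how large the two datasets are.
weights = ([0.5 / len(mobile_demos)] * len(mobile_demos)
           + [0.5 / len(static_demos)] * len(static_demos))
sampler = WeightedRandomSampler(weights, num_samples=len(combined))
loader = DataLoader(combined, batch_size=8, sampler=sampler)

for rgb, action_chunk in loader:
    pass  # one optimization step on the mixed batch would go here
```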
As a result, robots can autonomously perform tasks such as cooking, calling an elevator, wiping up spilled liquid, or giving humans high fives.
Watch the demo and the full lecture with more details on our YouTube channel.