Human+AI spotlight: Fei-Fei Li
A leader in artificial intelligence and champion for human-centred technology
Fei-Fei Li is a name synonymous with innovation and progress in AI. As a professor of computer science at Stanford University and co-director of the Stanford Human-Centered AI Institute, Li's work is shaping the future of how machines interact with the world.
Her work is central to the responsible development and deployment of AI. From her foundational contributions to computer vision with ImageNet to her advocacy for human-centred AI and for inclusivity in the field, Li is a driving force in steering the technology toward the greater good.
Revolutionizing computer vision with ImageNet
Li's groundbreaking contribution to AI came with ImageNet, a project she launched in 2007. This massive database of millions of labeled images became a game-changer for computer vision – the technology that allows machines to "see" and understand visual data. ImageNet gave researchers the fuel they needed to develop powerful AI systems for facial recognition, autonomous driving, and a multitude of other applications.
Beyond tech: Ethics and diversity in AI
Recognizing the potential for bias in AI algorithms, Li is a vocal advocate for ethical considerations in AI development. She emphasizes the importance of using diverse and unbiased datasets to train AI systems, so that they reflect the richness of the real world. Her vision extends beyond technical expertise alone, calling for insights from the humanities and social sciences to create a more well-rounded approach to AI development.
A collaborative future: Human-centred AI
Li's leadership at the Stanford Human-Centered AI Institute reflects her belief that AI should augment human capabilities, not replace them. Her vision is for AI to be a collaborative tool, helping us solve complex problems and improve our lives.
Li, like many, recognizes the potential dangers alongside the promise of AI. Aware of the historical anxieties surrounding technological advancements, she emphasizes the need for responsible development and deployment. Her advocacy for human-centred AI and diverse datasets addresses these concerns.
Championing inclusion in AI
Li's commitment to responsible AI extends to ensuring inclusivity in the field itself. She co-founded AI4ALL, a US-based nonprofit dedicated to increasing diversity and representation in AI research and development. This focus on inclusivity is crucial to building AI systems that reflect the complexities of the real world.
Bridging the gap between academia and industry
Li recognizes the growing gap in resources between academic and industry AI research. The vast computational power required to train the most powerful AI systems can be a significant hurdle for academics. She has called for a "moonshot mentality" with ambitious government investment to ensure AI serves the public good. Her efforts, including advocacy for the National Artificial Intelligence Research Resource (NAIRR), appear to be paying off: the recent introduction of a bill to establish NAIRR marks a positive step towards ensuring researchers have the resources they need to develop AI safely.
Weekly disruptions
A social app for creatives, Cara grew from 40k to 650k users in a week because artists are fed up with Meta’s AI policies (TechCrunch) “Artists have finally had enough with Meta’s predatory AI policies, but Meta’s loss is Cara’s gain. An artist-run, anti-AI social platform, Cara has grown from 40,000 to 650,000 users within the last week, catapulting it to the top of the App Store charts.”
In a strange twist, a real photo just won an AI photo contest (Android Authority) “Photographer Miles Astray submitted a photo to the AI category of the prestigious 1839 Awards. His piece — titled “F L A M I N G O N E” — ended up taking home the Bronze award in the judge’s category and winning the People’s Vote Award.”
‘Eno’ remixes the music doc — and Brian Eno’s entire career (Rolling Stone) “You can never step in the same river twice. And, unless you are blessed with an infinite amount of patience, time, and mortality, you can never see the same version of the Sundance documentary Eno twice. This is by design.”
Why G7 leaders are turning to a special guest — Pope Francis — for advice on AI (NPR) “When leaders of the world's leading industrialized nations meet in Italy this week, they'll be joined by a unique guest to talk about the risks posed by artificial intelligence: Pope Francis.”
First came ‘spam.’ Now, with AI, we’ve got ‘slop’ (New York Times) “Google suggesting that you could add nontoxic glue to make cheese stick to a pizza? That’s slop. So is a low-price digital book that seems like the one you were looking for, but not quite. And those posts in your Facebook feed that seemingly came from nowhere? They’re slop as well.”