How Trump and Harris would shape the future of AI
Comparing their stances ahead of the 2024 election

The United States has been a leader in tech advancements, from the moon landing in the 1960s to the present day. As we near the 2024 presidential election (November will come around quickly!), the next president's stance on tech will shape the global state of AI in many ways. This week’s newsletter explores Donald Trump’s and Kamala Harris’ positions.
Trump and Vance are all about deregulation
Deregulation: Former President Donald Trump and VP pick J.D. Vance want minimal regulation of AI. They have promised to scrap Joe Biden's executive order on AI risks, an approach they say will boost US competitiveness against China. Trump and Vance believe less regulation will foster innovation, but it could backfire, increasing the risks that come with unchecked AI development.
"AI might be the most dangerous thing out there," Trump said in February. He did a 360 last month after using AI for the first time to write a speech. "I’m going to use this. I've never seen anything like it."
National security: Trump's vision includes integrating AI-driven defense systems into national security strategies, with a potential "Manhattan Project" that uses AI. While details are scarce, the Trump campaign says it will leverage AI for military superiority.
Energy and economic growth: Trump believes AI’s demand for energy will create jobs and stimulate the economy. But, as I wrote last month, AI sucking up tons of water and energy and draining the world of its resources is probably a bad thing.
"We’re spending trillions of dollars on artificial, weak energy that’s not going to fire up our plants,” Trump said on the All-In podcast. “I realized the other day, more than anytime when we were … talking to a lot of geniuses from Silicon Valley and other places — they need electricity at levels that nobody’s ever experienced before, to be successful, to be a leader in AI."
Ethical considerations: Vance has proposed an independent ethics board to oversee AI globally, promoting responsible development. But his promise to halt AI regulation and his scrutiny of big tech reveal a complex, contradictory stance, leaving experts unsure how it would work in practice.
Kamala Harris’ focus on ethical AI
Ethical AI and regulation: As the leading figure in AI policy within the White House, Harris has been an advocate for breaking down the bias that much of AI is built on, calling for leaders of civil rights groups to inform AI discussions. This approach acknowledges the disproportionate impact of AI biases on marginalized communities.
Harris advocates for regulations to protect individuals from AI-related harms, including job displacement, and she calls for fair and accountable AI systems.
Environmental considerations: Harris' vision for AI addresses environmental concerns. She advocates for sustainable AI development to minimize the environmental impact of AI technologies. Her focus on reducing energy consumption and mitigating environmental damage aligns with broader sustainability goals.
Global standards and safety guidelines: In November 2023, Harris unveiled "Safe, Secure & Responsible" AI guidelines for federal agencies, emphasizing responsible AI innovation and risk management. At the UK's AI Safety Summit, she highlighted real-world concerns like deepfake abuse and biased AI in law enforcement, calling for action on AI ethics and accountability.
Support from Silicon Valley: Harris' relationship with Silicon Valley donors demonstrates her ability to balance tough regulatory stances with fostering innovation. Though cautious, her wealthy tech industry supporters see her as a potential reset in relations between the industry and the Democratic Party.
The next US president's stance on AI will profoundly impact the global landscape. Donald Trump and J.D. Vance's deregulation-focused approach promises rapid innovation but raises significant ethical and environmental concerns. Kamala Harris' measured, ethics-focused approach aims to ensure that AI development benefits society, addresses potential risks and environmental impacts, and is informed by a broader range of voices.
Thanks for reading! If you have thoughts about this week’s newsletter that you’d like to share, hit me up on LinkedIn!
Weekly disruptions
Video game performers will go on strike over artificial intelligence concerns (AP) Hollywood's video game performers are going on strike due to unresolved concerns over artificial intelligence protections, marking the second such strike by the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA). Talks with major gaming studios like Activision and Warner Bros. broke down over AI regulations, particularly regarding the definition of a "performer" and the use of generative AI.
OpenAI’s SearchGPT challenges Google's dominance (Human+AI) OpenAI unveiled SearchGPT, a prototype AI-powered search engine leveraging GPT-4 to deliver concise, sourced answers and support follow-up queries. Initially available to 10,000 test users, this development could disrupt Google's search dominance and raise important questions about data privacy, traditional SEO, and content creation.
Who will control the future of AI? (Washington Post) (Paywall) Sam Altman, CEO of OpenAI, emphasizes the need for a US-led global coalition to advance AI, proposing four key actions: robust security, infrastructure investment, coherent commercial diplomacy, and new AI governance models. Altman's strategy aims to maintain the US lead in AI development and counter authoritarian efforts to dominate the technology.