Foxes in the Henhouse


DALL-E 3: A fox entering a henhouse, its gaze fixed on oblivious hens engrossed in a luminous vision of a better world.

This past year has seen a great deal of hype around AI and its perceived dangers. Much of the fear is driven by the unknown, and some of it has been propagated by people who know better than most. I’ve been sharing my concerns for a while, but a couple of recent articles (How a billionaire-backed network of AI advisers took over Washington and Snoop Dogg, sentient AI and the ‘Arrival Mind Paradox’) compelled me to offer some related context. Both articles are worth reading!

When I joined Microsoft in 2012, AI and the coming revolution of intelligent agents were all the rage. Cortana had yet to be released, but the promise of AI-powered agents was clear as day. Our team, under the leadership of Blaise Agüera y Arcas, was central to this mission. The problem was that AI wasn’t good enough yet: it was still primarily task-specific, rule-based, and built around narrowly focused machine learning models. Many of our grand visions fell short.

Given the palpable sense of AI’s impending impact, I frequently found myself engaged in discussions about its ethical deployment. A compelling point of debate centered on who possesses the “right”—or the ethical integrity—to develop this potent technology. We were particularly focused on AI’s role as a go-between or personal agent in our interactions with the broader world. The prevailing sentiment was that, absent robust ethics and regulation, AI’s role—and the dynamics between such agents—would devolve into a competitive arms race.

At Microsoft, we believed we were uniquely positioned to serve as responsible stewards of AI. Unlike many other tech giants, we, along with Apple, didn’t monetize user data. This distinction is critical: when AI is used to monetize attention and influence, manipulation can easily become a central business objective. Such a path could slide us into a dystopian scenario where human agency is severely compromised.

Post-Microsoft, I co-founded Formation, an AI-driven loyalty platform built on personalization and gamification. We powered the loyalty programs of Starbucks, United Airlines, and other global brands. To our delight, our product drove an average 300% increase in incremental spend and engagement. The combination of clever reward and nudging mechanics with past-purchase behavior unlocked unprecedented engagement. And we were only getting started with the power of AI and personalized experiences.

Having seen the power of AI, and how easily it could be abused, I began to amplify my concerns about its potential pitfalls. In 2019, I joined a panel on AI ethics at an AILA event. Though I felt the other panelists had deeper qualifications, I had a distinct message to convey: a cautionary note about letting business objectives dictate AI success criteria without moral guardrails. My experiences at Microsoft and my observations at my startup underscored this. Given the chance, many business leaders will prioritize short-term gains in customer value extraction, even at the risk of losing the customer.

I believe the real, near-term danger is the potency of AI as a behavioral driver, and the power it will almost certainly wield globally. Right now, as a global society, we’re deciding who crafts the rules and who falls under their purview. Unfortunately, lobbyists for the tech giants are pushing sci-fi fears of AI sentience as a distraction from the actions that actually matter, because the regulations we really need would bring their rapid progress to a standstill. They argue that China will "win" if we slow down. But this is a global concern, and it demands global regulation.

The key concerns I see going unaddressed are:

  1. Source and Intellectual Property Attribution: If more than 1% (perhaps less) of generated content originates from a single source, be it an author, artist, or other creator, there should be attribution and royalties.
  2. Digital Identity: All AI-generated content and AI personas should be required to carry a unique digital “fingerprint” for easy detection and sourcing (see the sketch after this list).
  3. Anti-Manipulation Laws: Clear guidelines should prohibit the manipulation or coercion of both humans and other AI agents by AI services or agents.
  4. Eco-Footprint Disclosure: Every AI service should be required to disclose its environmental impact (by some estimates, AI could draw more than 20% of global power consumption by 2030).
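
To make the “fingerprint” idea in point 2 concrete, here’s a minimal sketch in Python. Everything in it is illustrative: it assumes a hypothetical provider-held signing key and uses a simple HMAC tag over the content and its metadata. Real proposals (cryptographic watermarking, C2PA-style content credentials) are far more sophisticated, but the principle is the same: anyone with the verification key can check where a piece of content came from, and any tampering breaks the tag.

```python
import hashlib
import hmac
import json

# Hypothetical provider-held signing key. In practice this would live in an
# HSM, and verification would run against a public key registry rather than
# a shared secret.
PROVIDER_KEY = b"example-secret-key"

def fingerprint(content: str, provider: str, model: str) -> dict:
    """Attach a provenance fingerprint to a piece of AI-generated content."""
    record = {"content": content, "provider": provider, "model": model}
    payload = json.dumps(record, sort_keys=True).encode()
    record["fingerprint"] = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify(record: dict) -> bool:
    """Recompute the tag and compare it with the record's claimed fingerprint."""
    claimed = record.get("fingerprint", "")
    body = {k: v for k, v in record.items() if k != "fingerprint"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed, expected)

record = fingerprint("A luminous vision of a better world.", "ExampleAI", "gen-1")
print(verify(record))        # True: intact record validates
record["content"] += "!"     # any edit to content or metadata...
print(verify(record))        # False: ...breaks the fingerprint
```

The design goal is detection and sourcing, not secrecy: a platform or regulator holding the verification key can tell which provider produced a given piece of content and whether it has been altered since.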

We’ve got foxes in the henhouse, and they’re biding their time. While we might overlook them in the short term, distracted by the allure of hyper-personalized AI companions and games, these foxes will seize their opportunity come winter.

