This breakdown focuses on what is discussed and how the evidence is framed, not on technical implementation, policy endorsement, or investment advice.
This episode comes from London Business School’s YouTube channel, where moderator Tim Smith hosts a panel with Janice Yanu, Laura Fernandez, and Dr. Ahmed Shi on AI and sustainability. The tension running through the conversation is simple: AI is increasingly positioned as a solution for climate and impact work, but the technology’s own infrastructure demands and ethical risks can undermine that promise if governance and data quality aren’t treated as first-order priorities.
Key Takeaways
- AI’s physical footprint is expanding at a staggering rate. Global data storage is projected to grow from roughly 5 zettabytes pre‑pandemic to 500 zettabytes by 2030, dramatically increasing environmental and infrastructure demands.
- Ethical AI is a socio‑technical challenge, not a coding issue. Responsible deployment requires coordinated governance across people, processes, policy, and technical controls, not just better models.
- Data quality is the core sustainability constraint. AI systems are only as reliable as the data they are trained on, making data integrity the true “fuel” behind ethical and effective AI.
- Sustainability extends beyond carbon. AI’s rollout affects inequality, economic resilience, and the digital divide, with risks that compound if left unmanaged.
- Ethical guardrails are now the top organisational concern. Public and corporate anxiety centres on training data bias and who AI services are ultimately sold to.
The Newsdesk Lead
Moderator Tim Smith hosts a London Business School panel featuring Janice Yanu, Laura Fernandez, and Dr. Ahmed Shi to examine AI through a sustainability and governance lens. Rather than focusing on AI as a software innovation, the discussion reframes it as a large‑scale physical and social system. The panel’s core position is that AI’s promise for sustainable impact depends entirely on rigorous governance and intentional efforts to avoid deepening global inequality.
Deep Dive
The panel breaks AI down into three interdependent layers: Data, Algorithms, and Infrastructure. While public debate often fixates on algorithms, the infrastructure layer carries the heaviest environmental cost. This includes hyperscale data centres, subsea cables, terrestrial fibre networks, and satellite systems, all of which are energy‑intensive and expanding rapidly.
To contextualise scale, one zettabyte equals roughly one trillion gigabytes, or about 1,000 trillion books. With projections reaching 500 zettabytes by 2030, the digital footprint of AI becomes impossible to ignore.
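The arithmetic behind those figures can be sanity‑checked in a few lines. This is a sketch using decimal SI units (1 ZB = 10²¹ bytes, 1 GB = 10⁹ bytes) and the panel's own 5 ZB and 500 ZB figures; the "books" comparison is the panel's analogy and is not recomputed here.

```python
# Sanity-check the scale claims cited by the panel (decimal SI units).
ZB = 10**21  # bytes in one zettabyte
GB = 10**9   # bytes in one gigabyte

gigabytes_per_zettabyte = ZB // GB          # 10**12, i.e. one trillion GB
growth_factor = 500 / 5                     # projected 2030 storage vs. pre-pandemic baseline

print(f"{gigabytes_per_zettabyte:,} GB per ZB")  # 1,000,000,000,000 GB per ZB
print(f"{growth_factor:.0f}x growth")            # 100x growth
```

So the projection amounts to a hundredfold increase in global data storage within roughly a decade, which is the scale driving the infrastructure concerns above.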
Ethical concerns concentrate around large language models and the biases embedded in their training data. Live polling during the session highlights data reliability and ethical usage as parallel, top‑ranked challenges for organisations already deploying AI. The panel argues that these issues cannot be solved reactively or in isolation.
Instead, they advocate for a socio‑technical framework aligning governance structures, organisational culture, and technical safeguards so AI systems reflect human values rather than undermine them.
Beyond emissions and energy use, AI is framed as a tool that can either strengthen or erode economic resilience and social cohesion. Without deliberate strategy, AI risks widening the digital divide and amplifying systemic inequality. Sustainable AI, the panel concludes, requires proactive design choices that prioritise transparency, inclusion, and long‑term community impact.
“For AI, quality data is critical because it is the fuel… and that’s why it’s a challenge that is bigger than the technical challenge; it’s a socio‑technical challenge that needs to be approached comprehensively across governance, processes, people, and guardrails.”
Why This Episode Matters
This discussion reframes sustainability in AI from a future concern to a present‑day governance test. As organisations race to deploy increasingly powerful systems, the absence of ethical and infrastructural oversight becomes a strategic liability, not just a moral one.
Audience response shows recognition rather than surprise: many viewers appear to have sensed the physical and ethical costs of AI long before seeing them articulated this clearly.
What Viewers Are Saying
- @panelinsights: “The zettabyte comparison finally made the infrastructure cost of AI feel real.”
- @datagovernancewatch: “Refreshing to hear ethics discussed as an operational system, not a PR add‑on.”
- @sustaintech: “This should be required viewing for anyone pitching ‘green AI’ solutions.”
Worth Watching If / Skip If
Worth Watching If…
✅ You want a grounded explanation of AI’s physical infrastructure and why scale matters.
✅ You’re responsible for governance, risk, or sustainability frameworks inside an organisation.
⏭️ Skip If…
A high‑level summary of the zettabyte growth projection and the Data–Algorithms–Infrastructure framework already gives you sufficient strategic context.
🎥 WATCH THE FULL EPISODE ON YOUTUBE
About the Creator
London Business School is a global business education institution producing research‑driven discussions on leadership, economics, technology, and global impact.
This panel features sustainability and AI experts Janice Yanu, Laura Fernandez, and Dr. Ahmed Shi.
Video Intelligence
- Platform: YouTube
- Views: 2,217
- Runtime: 1 hour 13 minutes
- Upload Date: 2025
This article is part of Creator Daily’s Business Desk, where we examine how creators frame strategy, incentives, and long-term thinking.