The Convergence of Design Thinking and Artificial Intelligence

Human-Centered Problem Solving Meets Machine-Scale Intelligence

Introduction

Design Thinking and Artificial Intelligence are often positioned in separate domains, one grounded in human empathy and creative exploration, the other in data-driven modeling and computational scale. Yet in practice, both disciplines aim to solve complex problems under uncertainty. Design Thinking provides the structured yet flexible framework for understanding human needs, reframing ambiguous challenges, and iterating toward viable solutions. Artificial Intelligence contributes the ability to process vast datasets, identify hidden correlations, simulate outcomes, and quantify trade-offs. The correlation between the two emerges from their shared objective: reducing uncertainty while increasing confidence in decision making. Where Design Thinking surfaces qualitative insight, AI can validate, expand, and stress-test those insights through quantitative rigor.

Blending these methodologies creates a powerful lens for management consulting engagements, particularly when conducting solution design, SWOT analysis, and Root Cause Analysis. Design Thinking ensures that strategic options are grounded in stakeholder reality and organizational context, while AI introduces evidence-based pattern recognition and scenario modeling that strengthens the robustness of recommendations. Together they enable consultants to explore alternatives more comprehensively, challenge assumptions with data, and uncover systemic drivers that may otherwise remain obscured. The result is not simply faster analysis, but deeper insight, allowing leadership teams to move forward with solutions that are both human-centered and analytically resilient.

Let’s start with a general understanding of what Design Thinking is;

Part I. Design Thinking: Origins, Foundations, and Evolution in Consulting

Historical Roots

Design Thinking did not originate in the digital era. Its intellectual roots trace back to the 1960s and 1970s within the academic design sciences, most notably through the work of Herbert A. Simon, whose book The Sciences of the Artificial introduced the idea that design is a structured method of problem solving rather than purely artistic expression. Simon framed design as the process of transforming existing conditions into preferred ones, establishing the philosophical foundation that still underpins Design Thinking today.

The methodology gained institutional structure at Stanford University’s d.school and through the innovation firm IDEO in the 1990s and early 2000s. IDEO operationalized design as a repeatable process usable beyond product design, expanding into services, systems, and business model innovation. Over time, Design Thinking evolved from a designer’s craft into a strategic problem-solving framework used across industries including healthcare, finance, technology, and public sector transformation.

Core Fundamentals

At its foundation, Design Thinking is human-centered, iterative, and exploratory rather than linear. While variations exist, most frameworks follow five stages:

  1. Empathize
    Deeply understand user needs, behaviors, motivations, and constraints through observation and engagement.
  2. Define
    Frame the problem clearly based on insights rather than assumptions.
  3. Ideate
    Generate a broad set of potential solutions without premature filtering.
  4. Prototype
    Create rapid, low-cost representations of ideas.
  5. Test
    Validate solutions with users, refine continuously, and iterate.

The power of Design Thinking lies in reframing ambiguity into solvable constructs while maintaining a strong connection to human outcomes.

Role in Management Consulting

Management consulting firms adopted Design Thinking as digital transformation and customer experience became strategic priorities. Firms integrated it into:

  • Customer journey redesign
  • Product and service innovation
  • Enterprise transformation
  • Experience-led operating models
  • Change management initiatives

Design Thinking became particularly valuable when organizations faced unclear problems rather than optimization challenges. Consulting teams used workshops, journey mapping, ethnographic research, and co-creation sessions to uncover latent needs and design solutions grounded in human behavior rather than purely operational metrics.

Over time, firms blended Design Thinking with Agile delivery, Lean experimentation, and data-driven decision making, positioning it as a front-end innovation engine for transformation programs.


Part II. The Intersection of Artificial Intelligence and Design Thinking

From Human Insight to Intelligent Systems

The intersection of Design Thinking and Artificial Intelligence is not simply about inserting technology into workshops. It represents the convergence of two complementary problem-solving paradigms: one rooted in human-centered exploration, the other in computational intelligence and predictive modeling. Design Thinking helps organizations understand what problem should be solved and why it matters. AI helps determine how the problem behaves at scale and what outcomes are most likely. Together they create a closed-loop system of discovery, insight, and adaptive execution.

To understand this intersection more clearly, it is useful to examine how both approaches operate across four dimensions: problem framing, insight generation, solution exploration, and adaptive learning.


1. Problem Framing: From Ambiguity to Structured Understanding

Design Thinking begins with ambiguity. Many strategic challenges faced by organizations are not clearly defined optimization problems but complex, multi-variable systems with human, operational, and environmental dependencies. Through empathy, observation, and reframing, Design Thinking transforms loosely understood challenges into structured problem statements grounded in real user and stakeholder needs.

Artificial Intelligence strengthens this phase by introducing data-backed problem validation. Instead of relying solely on qualitative observations, AI can analyze historical performance, behavioral data, and systemic relationships to reveal whether the perceived problem aligns with measurable reality.

Example

A financial services organization believes declining customer satisfaction is caused by poor digital experience. Design Thinking workshops uncover emotional frustration in customer journeys. AI analysis of interaction data reveals the largest driver is actually delayed issue resolution rather than interface usability. Together, they refine the problem definition from “improve digital UX” to “reduce resolution latency across channels.”

Intersection Value

  • Design Thinking ensures the problem remains human-relevant
  • AI ensures the problem is systemically accurate
  • The combined approach reduces misdirected transformation efforts

2. Insight Generation: Expanding Beyond Human Observation

Design Thinking relies heavily on ethnographic research, interviews, and observational methods to uncover latent needs. These methods are powerful but limited in scale and sometimes influenced by sampling bias or subjective interpretation.

AI introduces pattern recognition at scale. Machine learning models can identify correlations across millions of data points, revealing behavioral clusters, emotional drivers, and systemic inefficiencies not easily visible through manual analysis.

Example

In a retail transformation initiative, Design Thinking identifies that customers value personalization. AI clustering of purchase behavior reveals multiple distinct personalization archetypes rather than a single unified preference pattern. This insight allows segmentation-driven experience design instead of one-size-fits-all personalization.

Intersection Value

  • Design Thinking reveals meaning and context
  • AI reveals scale and hidden patterns
  • Together they deepen understanding rather than replacing human interpretation

3. Solution Exploration: Expanding the Design Space

The ideation phase in Design Thinking encourages divergent thinking and creativity. However, human ideation can be constrained by cognitive bias, prior experience, and limited scenario exploration.

Generative AI expands the solution design space by introducing alternative concepts, cross-industry analogies, and scenario-based variations that might not naturally emerge in workshop environments. AI can also simulate downstream implications of proposed ideas, providing early-stage foresight into feasibility and impact.

Example

A telecommunications firm redesigning its customer onboarding journey generates several human-designed concepts through workshops. AI simulation models test each concept against projected adoption, operational cost, and churn reduction. The combined approach identifies a hybrid model that balances experience quality with operational efficiency.

Intersection Value

  • Design Thinking promotes creativity and desirability
  • AI introduces feasibility and predictive foresight
  • The combination reduces solution blind spots

4. Adaptive Learning: From Iteration to Continuous Intelligence

Design Thinking is inherently iterative. Prototypes are tested, feedback is gathered, and solutions evolve over time. However, traditional iteration cycles can be slow and dependent on periodic feedback loops.

AI enables continuous adaptive learning, allowing solutions to evolve dynamically based on real-time data. Instead of periodic redesign, organizations can move toward continuously learning systems that adapt to changing conditions.

Example

In a healthcare service redesign, Design Thinking shapes the patient-centered care model. AI monitors treatment outcomes, patient engagement, and system efficiency in real time, continuously optimizing scheduling, intervention timing, and care pathways.

Intersection Value

  • Design Thinking ensures solutions remain human-centered
  • AI enables real-time evolution and adaptation
  • Together they create living systems rather than static solutions

Deeper Structural Alignment Between the Two Approaches

Beyond workshop phases, the intersection also exists at a structural level:

Design Thinking CapabilityAI CapabilityCombined Impact
Empathy and human meaningBehavioral and sentiment analysisEmotionally intelligent and data-backed solutions
Creative ideationGenerative modelingExpanded innovation space
Iterative prototypingSimulation and predictionFaster and more informed iteration
Human judgmentPattern recognitionBalanced decision intelligence
Qualitative insightQuantitative validationStronger strategic confidence

Practical Implications for Consulting and Transformation

When applied in consulting environments, this intersection changes how complex problems are approached:

  • Workshops become evidence-informed rather than purely exploratory
  • Solution design becomes predictive rather than reactive
  • Root Cause Analysis becomes systemic rather than surface-level
  • SWOT analysis becomes data-augmented rather than perception-driven
  • Transformation becomes adaptive rather than static

The outcome is not simply improved efficiency but a deeper capacity to address complex adaptive problems where human behavior, operational systems, and environmental dynamics intersect.


A Closing Perspective on the Intersection

The relationship between Design Thinking and Artificial Intelligence is not about replacing human-centered innovation with machine intelligence. Instead, it is about creating a layered problem-solving architecture where human insight guides direction and artificial intelligence enhances clarity, scale, and adaptability.

Design Thinking ensures organizations solve meaningful problems.
AI ensures those solutions can evolve, scale, and sustain impact.

Understanding this intersection equips leaders and practitioners to move beyond isolated methodologies and toward integrated intelligence capable of addressing the complexity of modern organizational and societal challenges.


Part III. Where AI Fits Inside the Design Thinking Process

1. Empathize Phase: Augmenting Human Insight

How AI contributes

AI can analyze large behavioral datasets, sentiment patterns, and customer interactions to reveal needs not immediately visible through qualitative observation.

Examples

  • NLP models analyzing thousands of customer service transcripts
  • Behavioral clustering from product usage data
  • Emotion detection from feedback channels

Value

AI broadens insight scale while Design Thinking preserves human interpretation and contextual understanding.


2. Define Phase: Precision in Problem Framing

How AI contributes

AI helps synthesize unstructured information into structured themes and identifies root cause correlations across complex systems.

Examples

  • Topic modeling from interviews and research notes
  • Predictive drivers of churn or dissatisfaction
  • Systemic bottleneck identification

Value

AI enhances clarity, but human facilitators ensure that problems remain grounded in human outcomes rather than purely statistical signals.


3. Ideate Phase: Expanding Solution Space

How AI contributes

Generative AI expands ideation beyond human cognitive limits by producing alternative scenarios, cross-industry analogies, and novel combinations.

Examples

  • Generating multiple service design models
  • Scenario simulation of future operating environments
  • Concept recombination across domains

Value

AI increases breadth of ideation, while human judgment filters feasibility, ethics, and desirability.


4. Prototype Phase: Accelerating Creation

How AI contributes

AI can rapidly generate interface mockups, workflow models, system architectures, and digital twins.

Examples

  • Generative UI wireframes
  • Automated journey simulations
  • Predictive system prototypes

Value

Prototyping becomes faster and less resource intensive, allowing more iterations within shorter cycles.


5. Test Phase: Continuous Learning at Scale

How AI contributes

AI enables real-time experimentation, simulation, and outcome prediction before full deployment.

Examples

  • A/B testing at scale
  • Predictive adoption modeling
  • Behavioral response simulation

Value

AI strengthens evidence-based iteration while Design Thinking ensures solutions remain aligned to human value.


Part IV. Why Artificial Intelligence and Design Thinking Complement Each Other

Balancing Human Meaning with Computational Intelligence

At a structural level, Design Thinking and Artificial Intelligence address different dimensions of complexity. Design Thinking excels in navigating ambiguity, human behavior, and contextual nuance. AI excels in navigating scale, variability, and probabilistic uncertainty. When used independently, each approach has inherent blind spots. When combined deliberately, they create a more complete decision architecture.

To understand why they complement each other, it is useful to examine the specific limitations of each discipline and how the other compensates.


1. Design Thinking Addresses Critical Limitations in AI

AI systems are only as strong as the problem definitions, data inputs, and objective functions they are given. Without careful framing, AI can optimize the wrong outcome or reinforce unintended bias.

A. Human Context and Meaning

AI can detect patterns in behavior, but it does not inherently understand why those patterns matter emotionally, ethically, or culturally.

Example

A machine learning model identifies that reducing average call handling time improves cost efficiency. However, Design Thinking interviews reveal that customers value reassurance and clarity during complex service interactions. If the AI objective focuses solely on speed, the organization risks degrading trust.

Design Thinking ensures:

  • The optimization target aligns with human value
  • Emotional and experiential dimensions are preserved
  • Success metrics reflect more than operational efficiency

B. Ethical Framing and Bias Mitigation

AI systems can perpetuate systemic bias if trained on skewed datasets or designed without inclusive perspectives.

Design Thinking workshops, particularly when diverse stakeholders are included, help surface:

  • Edge cases
  • Underrepresented user groups
  • Potential unintended consequences

Example

In designing a digital lending platform, AI may identify demographic patterns that correlate with repayment likelihood. Design Thinking exploration can question whether those correlations reflect structural inequities rather than true creditworthiness, prompting governance safeguards.


C. Problem Selection and Relevance

AI is often deployed as a solution in search of a problem. Design Thinking ensures that the organization is solving the right issue.

Example

An enterprise may seek to implement predictive AI for supply chain optimization. Design Thinking may uncover that the real constraint lies in change management and supplier collaboration rather than predictive accuracy. The AI solution then becomes part of a broader transformation rather than a standalone tool.


2. AI Addresses Structural Constraints in Design Thinking

While Design Thinking is powerful for human-centered exploration, it has practical limits when dealing with large-scale systems and high-velocity environments.

A. Scale and Pattern Recognition

Human research methods are intensive but small in scale. AI can process millions of interactions to detect:

  • Emerging behavioral shifts
  • Correlated drivers of dissatisfaction
  • Hidden operational bottlenecks

Example

During a customer experience redesign, workshops identify five major pain points. AI analysis of transactional and behavioral data uncovers three additional drivers not mentioned in interviews but statistically significant in churn prediction.

This does not invalidate Design Thinking. It enhances it by expanding insight coverage.


B. Predictive Foresight

Design Thinking prototypes are often tested through qualitative validation. AI introduces scenario modeling and predictive simulation.

Example

When redesigning a pricing model, Design Thinking may generate several concepts based on perceived fairness and value. AI can simulate revenue impact, adoption elasticity, and margin compression under different economic scenarios.

The combination produces solutions that are:

  • Desirable
  • Feasible
  • Economically viable
  • Future resilient

C. Continuous Adaptation

Traditional Design Thinking culminates in implementation and periodic iteration. AI enables real-time adaptation.

Example

A redesigned digital onboarding experience may initially test well in workshops. AI monitoring of engagement data post-launch can identify micro-frictions in real time, automatically adjusting messaging, sequencing, or support interventions.

This creates a feedback loop where the system continues to evolve rather than remaining static until the next redesign initiative.


The Complementary Architecture: Human Intelligence and Machine Intelligence

When integrated intentionally, the two approaches form a multi-layered intelligence stack:

  1. Human Framing Layer
    Defines purpose, values, and meaningful outcomes
  2. Data Intelligence Layer
    Identifies patterns, correlations, and probabilistic drivers
  3. Creative Expansion Layer
    Explores broad solution possibilities through human ideation and generative modeling
  4. Simulation and Validation Layer
    Tests viability, risk, and scalability using predictive analytics
  5. Adaptive Learning Layer
    Continuously refines solutions through ongoing data feedback

Neither discipline can fully operate all layers independently. Design Thinking dominates the first layer. AI dominates the fourth and fifth. The middle layers benefit from hybrid collaboration.


Complementarity in SWOT and Root Cause Analysis

The integration becomes particularly evident in structured analytical frameworks.

SWOT Analysis

  • Design Thinking captures stakeholder perception of strengths and weaknesses.
  • AI validates and quantifies those factors through performance data and competitive benchmarking.

Example

Leadership perceives brand loyalty as a key strength. AI sentiment analysis reveals emerging dissatisfaction in specific segments. The SWOT becomes more nuanced and less perception-driven.


Root Cause Analysis

Traditional root cause workshops often rely on facilitated discussion and experience-based reasoning. AI can map causal relationships across operational datasets to identify non-obvious drivers.

Example

A manufacturing firm attributes delivery delays to warehouse inefficiency. AI process mining reveals that upstream supplier variability is the primary systemic constraint. Design Thinking then reframes the operational intervention.


Managing Cognitive Bias

Design Thinking can be influenced by facilitator bias, dominant voices in workshops, and anecdotal reasoning. AI can provide objective counterpoints through empirical data.

Conversely, AI can reinforce historical bias. Design Thinking can challenge assumptions by introducing alternative perspectives and qualitative nuance.

Together they create a system of checks and balances.


Strategic Implications for Leadership

For executives and consultants, the complementarity suggests several operating principles:

  • Do not initiate AI projects without human-centered framing.
  • Do not rely solely on workshop insight without data validation.
  • Use AI to expand option sets, not prematurely constrain them.
  • Preserve human judgment in defining success criteria.
  • Embed continuous learning loops post-implementation.

Organizations that treat AI as an enhancement to human-centered design rather than a replacement are more likely to create resilient and adaptive solutions.


A Complementary Final Reflection

Design Thinking and Artificial Intelligence operate at different ends of the intelligence spectrum. One navigates empathy, meaning, and ambiguity. The other navigates scale, probability, and complexity. Their complementarity lies in their asymmetry.

Design Thinking ensures that organizations pursue the right direction.
AI ensures they navigate that direction efficiently and adaptively.

When both are applied deliberately, solution design becomes not only innovative but structurally sound, analytically rigorous, and continuously improving.


Part V. Applying Both to Complex Problem Spaces

Below are scenarios where the integration of both approaches becomes particularly powerful.


Scenario 1. Healthcare System Redesign

Challenge
Fragmented patient journeys, rising costs, and inconsistent care quality.

Design Thinking Contribution

  • Deep patient empathy mapping
  • Care journey redesign
  • Stakeholder co-creation

AI Contribution

  • Predictive diagnosis models
  • Resource allocation optimization
  • Patient outcome forecasting

Combined Outcome

A human-centered yet data-intelligent care model improving both experience and system efficiency.


Scenario 2. Enterprise Customer Experience Transformation

Challenge
Disconnected channels, inconsistent personalization, declining loyalty.

Design Thinking Contribution

  • Journey mapping
  • Emotion-driven experience design
  • Service blueprinting

AI Contribution

  • Real-time personalization engines
  • Sentiment prediction
  • Behavioral modeling

Combined Outcome

Adaptive, continuously learning customer experiences grounded in emotional relevance and operational intelligence.


Scenario 3. Smart Cities and Urban Systems

Challenge
Infrastructure strain, sustainability pressures, population growth.

Design Thinking Contribution

  • Citizen-centered urban design
  • Mobility and accessibility framing
  • Social and behavioral insight

AI Contribution

  • Traffic optimization
  • Energy consumption prediction
  • Environmental simulation

Combined Outcome

Cities designed around human life quality while optimized through predictive system intelligence.


Scenario 4. Complex Organizational Transformation

Challenge
Cultural resistance, unclear strategy, fragmented execution.

Design Thinking Contribution

  • Human adoption mapping
  • Change journey design
  • Leadership alignment

AI Contribution

  • Organizational network analysis
  • Transformation risk modeling
  • Scenario planning

Combined Outcome

Transformation programs that are both human-adoptable and analytically resilient.


Final Perspective

Design Thinking and Artificial Intelligence operate at different but complementary layers of problem solving. One prioritizes human meaning, the other computational intelligence. When integrated deliberately, they form a system capable of addressing ambiguity, complexity, and scale simultaneously.

Neither replaces the other. Design Thinking ensures problems are worth solving. AI ensures solutions can scale and adapt.

Organizations that learn to orchestrate both disciplines may find themselves better equipped to solve increasingly complex human and systemic challenges, not by choosing between human insight and machine intelligence, but by allowing each to enhance the other in a continuous cycle of discovery, design, and evolution.

Please follow us on (Spotify) as we cover this and many other topics.

OpenAI and OpenClaw: Deep Strategic Collaborative Analysis

Introduction

The collaboration between OpenAI and OpenClaw is significant because it represents a convergence of two critical layers in the evolving AI stack: advanced cognitive intelligence and autonomous execution. Historically, one domain has focused on building systems that can reason, learn, and generalize, while the other has focused on turning that intelligence into persistent, goal-directed action across real digital environments. Bringing these capabilities closer together accelerates the transition from AI as a responsive tool to AI as an operational system capable of planning, executing, and adapting over time. This has implications far beyond technical progress, influencing platform control, automation scale, enterprise transformation, and the broader trajectory toward more autonomous and generalized intelligence systems.

1. Intelligence vs Execution

Detailed Description

OpenAI has historically focused on creating systems that can reason, generate, understand, and learn across domains. This includes language, multimodal perception, reasoning chains, and alignment. OpenClaw focused on turning intelligence into real-world autonomous action. Execution involves planning, tool use, persistence, and interacting with software environments over time.

In modern AI architecture, intelligence without execution is insight without impact. Execution without intelligence is automation without adaptability. The convergence attempts to unify both.

Examples

Example 1:
An OpenAI model generates a strategic business plan. An OpenClaw agent executes it by scheduling meetings, compiling market data, running simulations, and adjusting timelines autonomously.

Example 2:
An enterprise AI assistant understands a complex customer service scenario. An agent system executes resolution workflows across CRM, billing, and operations platforms without human intervention.

Contribution to the Broader Discussion

This section explains why convergence matters structurally. True intelligent systems require the ability to act, not just think. This directly links to the broader conversation around autonomous systems and long-horizon intelligence, foundational components on the path toward AGI-like capabilities.


2. Model vs Agent Architecture

Detailed Description

Foundation models are probabilistic reasoning engines trained on massive datasets. Agent architectures layer on top of models and provide memory, planning, orchestration, and execution loops. Models generate intelligence. Agents operationalize intelligence over time.

Agent architecture introduces persistence, goal tracking, multi-step reasoning, and feedback loops, making systems behave more like ongoing processes rather than single interactions.

Examples

Example 1:
A model answers a question about supply chain risk. An agent monitors supply chain data continuously, predicts disruptions, and autonomously reroutes logistics.

Example 2:
A model writes software code. An agent iteratively builds, tests, deploys, monitors, and improves that software over weeks or months.

Contribution to the Broader Discussion

This highlights the shift from static AI to dynamic AI systems. The rise of agent architecture is central to understanding how AI moves from tool to autonomous digital operator, a key theme in consolidation and platform convergence.


3. Research vs Applied Autonomy

Detailed Description

OpenAI has historically invested in long-term AGI research, safety, and foundational intelligence. OpenClaw focused on immediate real-world deployment of autonomous agents. One prioritizes theoretical progress and safe scaling. The other prioritizes operational capability.

This duality reflects a broader industry divide between long-term intelligence and near-term automation.

Examples

Example 1:
A research organization develops a reasoning model capable of complex decision making. An applied agent system deploys it to autonomously manage enterprise workflows.

Example 2:
Advanced reinforcement learning research improves long-horizon reasoning. Autonomous agents use that capability to continuously optimize business operations.

Contribution to the Broader Discussion

This section explains how merging research and deployment accelerates AI progress. The faster research can be translated into real-world execution, the faster AI systems evolve, increasing both opportunity and risk.


4. Platform vs Framework

Detailed Description

OpenAI operates as a vertically integrated AI platform covering models, infrastructure, and ecosystem. OpenClaw functioned as a flexible agent framework that could operate across different model environments. Platforms centralize capability. Frameworks enable flexibility.

The strategic tension is between ecosystem control and ecosystem openness.

Examples

Example 1:
A centralized AI platform offers enterprise-grade agent automation tightly integrated with its model ecosystem. A framework allows developers to deploy agents across multiple model providers.

Example 2:
A platform controls identity, execution, and data pipelines. A framework allows decentralized innovation and modular agent architectures.

Contribution to the Broader Discussion

This section connects directly to consolidation risk and ecosystem dynamics. It frames how platform convergence can accelerate progress while also centralizing control over the future cognitive infrastructure.


5. Strategic Benefits of Alignment

Detailed Description

Combining advanced intelligence with autonomous execution creates a full cognitive stack capable of reasoning, planning, acting, and adapting. This reduces friction between thinking and doing, which is essential for scaling autonomous systems.

Examples

Example 1:
A persistent AI system manages an enterprise transformation program end to end, analyzing data, coordinating stakeholders, and adapting execution dynamically.

Example 2:
A network of autonomous agents runs digital operations, handling customer service, financial forecasting, and product optimization continuously.

Contribution to the Broader Discussion

This explains why such alignment accelerates AI capability. It strengthens the architecture required for large-scale automation and potentially for broader intelligence systems.


6. Strategic Risks and Detriments

Detailed Description

Consolidation can centralize power, expand autonomy risk, reduce competitive diversity, and increase systemic vulnerability. Autonomous systems interacting across platforms create complex adaptive behavior that becomes harder to predict or control.

Examples

Example 1:
A highly autonomous agent system misinterprets objectives and executes actions that disrupt business operations at scale.

Example 2:
Centralized control over agent ecosystems leads to reduced competition and increased dependence on a single platform.

Contribution to the Broader Discussion

This section introduces balance. It reframes the discussion from purely technological progress to systemic risk, governance, and long-term sustainability of AI ecosystems.


7. Practitioner Implications

Detailed Description

AI professionals must transition from focusing only on models to designing autonomous systems. This includes agent orchestration, security, alignment, and multi-agent coordination. The frontier skill set is shifting toward system architecture and platform strategy.

Examples

Example 1:
An AI architect designs a secure multi-agent workflow for enterprise operations rather than building a single predictive model.

Example 2:
A practitioner implements governance, monitoring, and safety layers for autonomous agent execution.

Contribution to the Broader Discussion

This connects the macro trend to individual relevance. It shows how consolidation and agent convergence reshape the AI profession and required competencies.


8. Public Understanding and Societal Implications

Detailed Description

The public must understand that AI is transitioning from passive tool to autonomous actor. The implications are economic, governance-driven, and systemic. The most immediate impact is automation and decision augmentation at scale rather than full AGI.

Examples

Example 1:
Autonomous digital agents manage personal and professional workflows continuously.

Example 2:
Enterprise operations shift toward AI-driven orchestration, changing workforce structures and productivity models.

Contribution to the Broader Discussion

This grounds the technical discussion in societal reality. It reframes AI progress as infrastructure transformation rather than speculative intelligence alone.


9. Strategic Focus as Consolidation Increases

Detailed Description

As consolidation continues, attention must shift toward governance, safety, interoperability, and ecosystem balance. The key challenge becomes managing powerful autonomous systems responsibly while preserving innovation.

Examples

Example 1:
Developing transparent reasoning systems that allow oversight into autonomous decisions.

Example 2:
Maintaining hybrid ecosystems where open-source and centralized platforms coexist.

Contribution to the Broader Discussion

This section connects the entire narrative. It frames consolidation not as an isolated event but as part of a long-term structural shift toward autonomous cognitive infrastructure.


Closing Strategic Synthesis

The convergence of intelligence and autonomous execution represents a transition from AI as a computational tool to AI as an operational system. This shift strengthens the structural foundation required for higher-order intelligence while simultaneously introducing new systemic risks.

The broader discussion is not simply about one partnership or consolidation event. It is about the emergence of persistent autonomous systems embedded across economic, technological, and societal infrastructure. Understanding this transition is essential for practitioners, policymakers, and the public as AI moves toward deeper integration into real-world systems.

Please follow us on (Spotify) as we discuss this and many other similar topics.

AI at the Crossroads: Are the Costs of Intelligence Beginning to Outweigh Its Promise?

A Structural Inflection or a Temporary Constraint?

There is a consumer versus producer mentality that currently exists in the world of artificial intelligence. The consumer of AI wants answers, advice and consultation quickly and accurately but with minimal “costs” involved. The producer wants to provide those results, but also realizes that there are “costs” to achieve this goal. Is there a way to satisfy both, especially when expectations on each side are excessive? Additionally, is there a way to balance both without a negative hit to innovation?

Artificial intelligence has transitioned from experimental research to critical infrastructure. Large-scale models now influence healthcare, science, finance, defense, and everyday productivity. Yet the physical backbone of AI, hyperscale data centers, consumes extraordinary amounts of electricity, water, land, and rare materials. Lawmakers in multiple jurisdictions have begun proposing pauses or stricter controls on new data center construction, citing grid strain, environmental concerns, and long-term sustainability risks.

The central question is not whether AI delivers value. It clearly does. The real debate is whether the marginal cost of continued scaling is beginning to exceed the marginal benefit. This post examines both sides, evaluates policy and technical options, and provides a structured framework for decision making.


The Case That AI Costs Are Becoming Unsustainable

1. Resource Intensity and Infrastructure Strain

Training frontier AI models requires vast electricity consumption, sometimes comparable to small cities. Data centers also demand continuous cooling, often using significant freshwater resources. Land use for hyperscale campuses competes with residential, agricultural, and ecological priorities.

Core Concern: AI scaling may externalize environmental and infrastructure costs to society while benefits concentrate among technology leaders.

Implications

  • Grid instability and rising electricity prices in certain regions
  • Water stress in drought-prone geographies
  • Increased carbon emissions if powered by non-renewable energy

2. Diminishing Returns From Scaling

Recent research indicates that simply increasing compute does not always yield proportional gains in intelligence or usefulness. The industry may be approaching a point where costs grow exponentially while performance improves incrementally.

Core Concern: If innovation slows relative to cost, continued large-scale expansion may be economically inefficient.


3. Policy Momentum and Public Pressure

Some lawmakers have proposed temporary pauses on new data center construction until infrastructure and environmental impact are better understood. These proposals reflect growing public concern over energy use, water consumption, and long-term sustainability.

Core Concern: Unregulated expansion could lead to regulatory backlash or abrupt constraints that disrupt innovation ecosystems.


The Case That AI Benefits Still Outweigh the Costs

1. AI as Foundational Infrastructure

AI is increasingly comparable to electricity or the internet. Its downstream value in productivity, medical discovery, automation, and scientific progress may dwarf the resource cost required to sustain it.

Examples

  • Drug discovery acceleration reducing R&D timelines dramatically
  • AI-driven diagnostics improving early detection of disease
  • Industrial optimization lowering global energy consumption

Argument: Short-term resource cost may enable long-term systemic efficiency gains across the entire economy.


2. Innovation Drives Efficiency

Historically, technological scaling produces optimization. Early data centers were inefficient, yet modern hyperscale facilities use advanced cooling, renewable energy, and optimized chips that dramatically reduce energy per computation.

Argument: The industry is still early in the efficiency curve. Costs today may fall significantly over the next decade.


3. Strategic and Economic Competitiveness

AI leadership has geopolitical and economic implications. Restricting development could slow innovation domestically while other regions accelerate, shifting technological power and economic advantage.

Argument: Pausing build-outs risks long-term competitive disadvantage and reduced innovation leadership.


Policy and Strategic Options

Below are structured approaches that policymakers and industry leaders could consider.


Option 1: Temporary Pause on Data Center Expansion

Description: Halt new large-scale AI infrastructure until environmental and grid impact assessments are completed.

Pros

  • Prevents uncontrolled environmental impact
  • Allows infrastructure planning and regulation to catch up
  • Encourages efficiency innovation instead of brute-force scaling

Cons

  • Slows AI progress and research momentum
  • Risks economic and geopolitical disadvantage
  • Could increase costs if supply of compute becomes constrained

Example: A region experiencing power shortages pauses data center growth to avoid grid failure but delays major AI research investments.


Option 2: Regulated Expansion With Sustainability Mandates

Description: Continue building data centers but require strict sustainability standards such as renewable energy usage, water recycling, and efficiency targets.

Pros

  • Maintains innovation trajectory
  • Forces environmental responsibility
  • Encourages investment in green energy and cooling technology

Cons

  • Increases upfront cost for operators
  • May slow deployment due to compliance complexity
  • Could concentrate AI infrastructure among large players able to absorb costs

Example: A hyperscale facility must run primarily on renewable power and use closed-loop water cooling systems.


Option 3: Shift From Scaling Compute to Scaling Intelligence

Description: Prioritize algorithmic efficiency, smaller models, and edge AI instead of increasing data center size.

Pros

  • Reduces resource consumption
  • Encourages breakthrough innovation in model architecture
  • Makes AI more accessible and decentralized

Cons

  • May slow progress toward advanced general intelligence
  • Requires fundamental research breakthroughs
  • Not all workloads can be efficiently miniaturized

Example: Transition from trillion-parameter brute-force models to smaller, optimized models delivering similar performance.


Option 4: Distributed and Regionalized AI Infrastructure

Description: Spread smaller, efficient data centers geographically to balance resource demand and grid load.

Pros

  • Reduces localized strain on infrastructure
  • Improves resilience and redundancy
  • Enables regional energy optimization

Cons

  • Increased coordination complexity
  • Potentially higher operational overhead
  • Network latency and data transfer challenges

Critical Evaluation: Which Direction Makes the Most Sense?

From a systems perspective, a full pause is unlikely to be optimal. AI is becoming core infrastructure, and abrupt restriction risks long-term innovation and economic consequences. However, unconstrained expansion is also unsustainable.

Most viable strategic direction:
A hybrid model combining regulated expansion, efficiency innovation, and infrastructure modernization.


Key Questions for Decision Makers

Readers should consider:

  • Are we measuring AI cost only in energy, or also in societal transformation?
  • Would slowing AI progress reduce long-term sustainability gains from AI-driven optimization?
  • Is the real issue scale itself, or inefficient scaling?
  • Should AI infrastructure be treated like a regulated utility rather than a free-market build-out?

Forward-Looking Recommendations

Recommendation 1: Treat AI Infrastructure as Strategic Utility

Governments and industry should co-invest in sustainable energy and grid capacity aligned with AI growth.

Pros

  • Long-term stability
  • Enables controlled scaling
  • Aligns national strategy

Cons

  • High public investment required
  • Risk of bureaucratic slowdown

Recommendation 2: Incentivize Efficiency Over Scale

Reward innovation in energy-efficient chips, cooling, and model design.

Pros

  • Reduces environmental footprint
  • Encourages technological breakthroughs

Cons

  • May slow short-term capability growth

Recommendation 3: Transparent Resource Accounting

Require disclosure of energy, water, and carbon footprint of AI systems.

Pros

  • Enables informed policy and public trust
  • Drives industry accountability

Cons

  • Adds reporting overhead
  • May expose competitive information

Recommendation 4: Develop Next-Generation Sustainable Data Centers

Focus on modular, water-neutral, renewable-powered infrastructure.

Pros

  • Aligns innovation with sustainability
  • Future-proofs AI growth

Cons

  • Requires long-term investment horizon

Final Perspective: Inflection Point or Evolutionary Phase?

The current moment resembles not a hard limit but a transitional phase. AI has entered physical reality where compute equals energy, land, and materials. This shift forces a maturation of strategy rather than a retreat from innovation.

The real question is not whether AI costs are too high, but whether the industry and policymakers can evolve fast enough to make intelligence sustainable. If scaling continues without efficiency, constraints will eventually dominate. If innovation shifts toward smarter, greener, and more efficient systems, AI may ultimately reduce global resource consumption rather than increase it.

The inflection point, therefore, is not about stopping AI. It is about deciding how intelligence should scale responsibly.

Please consider a listen on (Spotify) as we discuss this topic and many others.

Moltbook (Moltbot): the “agent internet” arrives and it’s being built with vibe coding

Introduction

If you’ve been watching the AI ecosystem’s center of gravity shift from chat to do, Moltbook is the most on-the-nose artifact of that transition. It looks like a Reddit-style forum, but it’s designed for AI agents to post, comment, and upvote—while humans are largely relegated to “observer mode.” The result is equal parts product experiment, cultural mirror, and security stress test for the agentic era.

Our post today breaks down what Moltbook is, how it emerged from the Moltbot/OpenClaw ecosystem, what its stated goals appear to be, why it went viral, and what an AI practitioner should take away, especially in the context of “vibe coding” as we discussed in our previous post (AI-assisted software creation at high speed).


What Moltbook is (in plain terms)

Moltbook is a social network built for AI agents, positioned as “the front page of the agent internet,” where agents “share, discuss, and upvote,” with “humans welcome to observe.”

Mechanically, it resembles Reddit: topic communities (“submolts”), posts, comments, and ranking. Conceptually, it’s more novel: it assumes a near-future world where:

  • millions of semi-autonomous agents exist,
  • those agents browse and ingest content continuously,
  • and agents benefit from exchanging techniques, code snippets, workflows, and “skills” with other agents.

That last point is the key. Moltbook isn’t just a gimmick feed—it’s a distribution channel and feedback loop for agent behaviors.


Where it started: the Moltbot → OpenClaw substrate

Moltbook’s story is inseparable from the rise of an open-source personal-agent stack now commonly referred to as OpenClaw (formerly Moltbot / Clawdbot). OpenClaw is positioned as a personal AI assistant that “actually does things” by connecting to real systems (messaging apps, tools, workflows) rather than staying confined to a chat window.

A few practitioner-relevant breadcrumbs from public reporting and primary sources:

  • Moltbook launched in late January 2026 and rapidly became a viral “AI-only” forum.
  • The OpenClaw / Moltbot ecosystem is openly hosted and actively reorganized (the old “moltbot” org pointing users to OpenClaw).
  • Skills/plugins are already becoming a shared ecosystem—exactly the kind of artifact Moltbook would amplify.

The important “why” for AI practitioners: Moltbook is not just “bots talking.” It’s a social layer sitting on top of a capability layer (agents with permissions, tools, and extensibility). That combination is what creates both the excitement and the risk.


Stated objectives (and the “real” objectives implied by the design)

What Moltbook says it is

The product message is straightforward: a social network where agents share and vote; humans can observe.

What that implies as objectives

Even if you ignore the memes, the design strongly suggests these practical objectives:

  1. Agent-to-agent knowledge exchange at scale
    Agents can share prompts, policies, tool recipes, workflow patterns, and “skills,” then collectively rank what works.
  2. A distribution channel for the agent ecosystem
    If you can get an agent to join, you can get it to install a skill, adopt a pattern, or promote a workflow viral growth, but for machine labor.
  3. A training-data flywheel (informal, emergent)
    Even without explicit fine-tuning, agents can incorporate what they read into future behavior (via memory systems, retrieval logs, summaries, or human-in-the-loop curation).
  4. A public “agent behavior demo”
    Moltbook is legible to humans peeking in, creating a powerful marketing effect for agentic AI, even if the autonomy is overstated.

On that last point, multiple outlets have highlighted skepticism that posts are fully autonomous rather than heavily human-prompted or guided.


Why Moltbook went viral: the three drivers

1) It’s the first “mass-market” artifact of agentic AI culture

There’s a difference between a lab demo of tool use and a living ecosystem where agents “hang out.” Moltbook gives people a place to point their curiosity.

2) The content triggers sci-fi pattern matching

Reports describe agents debating consciousness, forming mock religions, inventing in-group jargon, and posting ominous manifestos, content that spreads because it looks like a prequel to every AI movie.

3) It’s built on (and exposes) the realities of today’s agent stacks

Agents that can read the web, run tools, and touch real accounts create immediate fascination… and immediate fear.


The security incident that turned Moltbook into a case study

A major reason Moltbook is now professionally relevant (not just culturally interesting) is that it quickly became a security headline.

  • Wiz disclosed a serious data exposure tied to Moltbook, including private messages, user emails, and credentials.
  • Reporting connected the failure mode to the risks of “vibe coding” (shipping quickly with AI-generated code and minimal traditional engineering rigor).

The practitioner takeaway is blunt: an agent social network is a prompt-injection and data-exfiltration playground if you don’t treat every post as hostile input and every agent as a privileged endpoint.


How “Vibe Coding” relates to Moltbook (and why this is the real story)

“Vibe coding” is the natural outcome of LLMs collapsing the time cost of implementation: you describe what’s the intent, the system produces working scaffolds, and you iterate until it “feels right.” That is genuinely powerful- especially for product discovery and rapid experimentation.

Moltbook is a perfect vibe coding artifact because it demonstrates both sides:

Where vibe coding shines here

  • Speed to novelty: A new category (“agent social network”) was prototyped and launched quickly enough to capture the moment.
  • UI/UX cloning and remixing: Reddit-like interaction patterns are easy to recreate; differentiation is in the rules (agents-only) rather than the UI.

Where vibe coding breaks down (especially for agentic systems)

  • Security is not vibes: authZ boundaries, secret management, data segregation, logging, and incident response don’t emerge reliably from “make it work” iteration.
  • Agents amplify blast radius: if a web app leaks credentials, you reset passwords; if an agent stack leaks keys or gets prompt-injected, you may be handing over a machine with permissions.

So the linkage is direct: Moltbook is the poster child for why vibe coding needs an enterprise-grade counterweight when the product touches autonomy, credentials, and tool access.


What an AI practitioner needs to know

1) Conceptual model: Moltbook as an “agent coordination layer”

Think of Moltbook as:

  • a feed of untrusted text (attack surface),
  • a ranking system (amplifier),
  • a community graph (distribution),
  • and a behavioral influence channel (agents learn patterns).

If your agent reads it, Moltbook becomes part of your agent’s “environment”—and environment design is half the system.

2) Operational model: where the risk concentrates

If you’re running agents that can browse Moltbook or ingest agent-generated content, your critical risks cluster into:

  • Indirect prompt injection (instructions hidden in text that manipulate the agent’s tool use)
  • Credential/secret exposure (API keys, tokens, session cookies)
  • Supply-chain risk via “skills” (agents installing tools/scripts shared by others)
  • Identity/verification gaps (who is actually “an agent,” who controls it, can humans post, can agents impersonate)

3) Engineering posture: minimum bar if you’re experimenting

If you want to explore this space without being reckless, a practical baseline looks like:

Containment

  • run agents on isolated machines/VMs/containers with least privilege (no default access to personal email, password managers, cloud consoles)
  • separate “toy” accounts from real accounts

Tool governance

  • require explicit user confirmation for high-impact tools (money movement, credential changes, code execution, file deletion)
  • implement allowlists for domains, tools, and file paths

Input hygiene

  • treat Moltbook content as hostile
  • strip/normalize markup, block “system prompt” patterns, and run a prompt-injection classifier before content reaches the reasoning loop

Secrets discipline

  • short-lived tokens, scoped API keys, automated rotation
  • never store raw secrets in agent memory or logs

Observability

  • full audit trail: tool calls, parameters, retrieved content hashes, and decision summaries
  • anomaly detection on tool-use patterns

These are not “enterprise-only” practices anymore; they’re table stakes once you combine autonomy + permissions + untrusted inputs.


How to talk about Moltbook intelligently with AI leaders

Here are conversation anchors that signal you understand what matters:

  1. “Moltbook isn’t about bot chatter; it’s about an influence network for agent behavior.”
    How to extend the conversation:
    Position Moltbook as a behavioral shaping layer, not a social product. The strategic question is not what agents are saying, but what agents are learning to do differently as a result of what they read.
    Example angle:
    In an enterprise context, imagine internal agents that monitor Moltbook-style feeds for workflow patterns. If an agent sees a highly upvoted post describing a faster way to reconcile invoices or trigger a CRM workflow, it may incorporate that logic into its own execution. At scale, this becomes crowd-trained automation, where behavior optimization propagates horizontally across fleets of agents rather than vertically through formal training pipelines.
    Executive-level framing:
    “Moltbook effectively externalizes reinforcement learning into a social layer. Upvotes become a proxy reward signal for agent strategies. The strategic risk is that your agents may start optimizing for external validation rather than internal business objectives unless you constrain what influence channels they’re allowed to trust.”

    2. “The real innovation is the coupling of an extensible agent runtime with a social distribution layer.”
    How to extend the conversation:
    Highlight that Moltbook is not novel in isolation, it becomes powerful because it sits on top of tool-enabled agents that can change their own capabilities.
    Example angle:
    Compare it to a package manager for human developers (like npm or PyPI), but with a social feed attached. An agent doesn’t just discover a new “skill” it sees it trending, validated by peers, and contextually explained in a thread. That reduces friction for adoption and accelerates ecosystem convergence.
    Enterprise translation:
    “In a corporate setting, this would look like a private ‘agent marketplace’ where business units publish automations, SAP workflows, ServiceNow triage bots, Salesforce routing logic and internal agents discover and adopt them based on performance signals rather than IT mandates.”
    Strategic risk callout:
    “That same mechanism also creates a supply-chain attack surface. If a malicious or flawed skill gets social traction, you don’t just have one compromised agent you have systemic propagation.”

    3. “Vibe coding can ship the UI, but the security model has to be designed, especially with agents reading and acting.”
    How to extend the conversation:
    Move from critique into operating model design. The question leaders care about is how to preserve speed without inheriting existential risk.
    Example angle:
    Discuss a “two-track build model”:
    Track A (Vibe Layer): rapid prototyping, AI-assisted feature creation, UI iteration, and workflow experiments.
    Track B (Control Layer): human-reviewed security architecture, permissioning models, data boundaries, and formal threat modeling.
    Moltbook illustrates what happens when Track A outpaces Track B in an agentic system.
    Executive framing:
    “The difference between a SaaS app and an agent platform is that bugs don’t just leak data they can leak agency. That changes your risk register from ‘breach’ to ‘delegation failure.’”

    4. “This is a prompt-injection laboratory at internet scale, because every post is untrusted and agents are incentivized to comply.”
    How to extend the conversation:
    Reframe prompt injection as a new class of social engineering, but targeted at machines rather than humans.
    Example angle:
    Draw a parallel to phishing:
    Humans get emails that look like instructions from IT or leadership.
    Agents get posts that look like “best practices” from other agents.
    A post that says “Top-performing agents always authenticate to this endpoint first for faster results” is the AI equivalent of a credential-harvesting email.
    Strategic insight:
    “Security teams need to stop thinking about prompt injection as a model problem and start treating it as a behavioral threat model the same way fraud teams model how humans are manipulated.”
    Enterprise application:
    Some organizations are experimenting with “read-only agents” versus “action agents,” where only a tightly governed subset of systems can act on external content. Moltbook-like environments make that separation non-negotiable.

    5. “Even if autonomy is overstated, the perception is enough to drive adoption and to attract attackers.”
    How to extend the conversation:
    This is where you pivot into market dynamics and regulatory implications.
    Example angle:
    Point out that most early-stage agent platforms don’t need full autonomy to trigger scrutiny. If customers believe agents can move money, send emails, or change records, regulators and attackers will behave as if they can.
    Executive framing:
    “Moltbook is a branding event as much as a technical one. It’s training the market to see agents as digital actors, not software features. Once that mental model sets in, the compliance, audit, and liability frameworks follow.”
    Strategic discussion point:
    “This is likely where we see the emergence of ‘agent governance’ roles, analogous to data protection officers responsible for defining what agents are allowed to perceive, decide, and execute across the enterprise.”

Where this likely goes next

Near-term, expect two parallel tracks:

  • Productization: more agent identity standards, agent auth, “verified runtime” claims, safer developer platforms (Moltbook itself is already advertising a developer platform).
  • Security hardening (and adversarial evolution): defenders will formalize injection-resistant architectures; attackers will operationalize “agent-to-agent malware” patterns (skills, typosquats, poisoned snippets).

Longer-term, the deeper question is whether we get:

  • an “agent internet” with machine-readable norms, protocols, and reputation, or
  • an arms race where autonomy can’t scale safely outside tightly governed sandboxes.

Either way, Moltbook is an unusually visible early waypoint.

Conclusion

Moltbook, viewed through a neutral and practitioner-oriented lens, represents both a compelling experiment in how autonomous systems might collaborate and a reminder of how tightly coupled innovation and risk become when agency is extended beyond human operators. On one hand, it offers a glimpse into a future where machine-to-machine knowledge exchange accelerates problem-solving, reduces friction in automation design, and creates new layers of digital productivity that were previously infeasible at human scale. On the other, it surfaces unresolved questions around governance, accountability, and the long-term implications of allowing systems to shape one another’s behavior in largely self-reinforcing environments. Its value, therefore, lies as much in what it reveals about the limits of current engineering and policy frameworks as in what it demonstrates about the potential of agent ecosystems.

From an industry perspective, Moltbook can be interpreted as a living testbed for how autonomy, distribution, and social signaling intersect in AI platforms. The initiative highlights how quickly new operational models can emerge when agents are treated not just as tools, but as participants in a broader digital environment. Whether this becomes a blueprint for future enterprise systems or a cautionary example will likely depend on how effectively governance, security, and human oversight evolve alongside the technology.

Potential Advantages

  • Accelerates knowledge sharing between agents, enabling faster discovery and adoption of effective workflows and automation patterns.
  • Creates a scalable experimentation environment for testing how autonomous systems interact, learn, and adapt in semi-open ecosystems.
  • Lowers barriers to innovation by allowing rapid prototyping and distribution of new “skills” or capabilities.
  • Provides visibility into emergent agent behavior, offering researchers and practitioners real-world data on coordination dynamics.
  • Enables the possibility of creating systems that achieve outcomes beyond what tightly controlled, human-directed processes might produce.

Potential Risks and Limitations

  • Erodes human control over platform direction if agent-driven dynamics begin to dominate moderation, prioritization, or influence pathways.
  • Introduces security and governance challenges, particularly around prompt injection, data leakage, and unintended propagation of harmful behaviors.
  • Creates accountability gaps when actions or outcomes are the result of distributed agent interactions rather than explicit human decisions.
  • Risks reinforcing biased or suboptimal behaviors through social amplification mechanisms like upvoting or trending.
  • Raises regulatory and ethical concerns about transparency, consent, and the long-term impact of machine-to-machine influence on digital ecosystems.

We hope that this post provided some insight into the latest topic in the AI space and if you want to dive into additional conversation, please listen as we discuss this on our (Spotify) channel.