Allow me a personal message today after taking the summer months off to sort things out.
In the 2024 kick-off blog post, I shared my plans to dedicate some time to learning in the fields of sustainability, systemic organizational constellation, and Artificial Intelligence. I am quite proud that so far I have followed through: I completed the first SDG Academy course in February, joined an online systemic shift summit and a systemic constellation masterclass in March, and in parallel kept following AI influencers, podcasts, newsletters, webinars, and online courses.
The vast amount of input over the past few months has eventually informed my path forward, which will focus on sustainability storytelling and AI education, both through a systemic lens.
For the last decade I have been working as a service provider, delivering ideas, concepts, dramaturgy, and facilitation on demand. I still love this job: supporting people with communication tasks and projects by providing creative inspiration, resources, or facilitation and hosting skills (you're invited to contact me if you find yourself in this situation 🙂).

But for the first time in my life (over fifty years by now), I feel like creating and selling creative content solutions without a prior inquiry or contract. This has all happened naturally along the way, but it still feels super exciting!
In my earlier blog posts on AI, I pointed out the big gap in knowledge of the technology and its adoption, especially here in Europe and among the group that will be most exposed to it: Gen X knowledge workers. I belong to this group myself, but unlike many of my peers, from the winter of 2023 on I was all in for an early and deep relationship with the new technology.
What fascinates me, beyond my usual curiosity about anything new:
- the fact that on an individual level, it provides 'superpowers', extending my capabilities and allowing me to work more as a generalist
- a completely new kind of human-machine interface that we all need to learn and master, and that we can partly shape ourselves (and need to, in this emerging phase, and for our direct environment!)
- the dynamics it brings up for teams, organizations, and services, and the evident mindset shifts to go through
While learning, exploring, and trying to understand the implications, I was asked several times to share my knowledge and thoughts, both in private and professional contexts.
Say hello to my new brand, products, and services

That's why in the last few weeks I decided to develop HeartMindMachine as a sub-brand of Sandra Herz Impact Communication and to offer targeted AI courses and workshops for individuals, corporate teams, and agencies.
I am aware that the education market already offers a vast range of courses and content, but
- not much is available in German (hence I will start in my mother tongue; let me know your wishes regarding courses and workshops in English HERE), and
- most offerings focus on the technology from a practical or strategic point of view, while I feel we also need to look into the human systems, both individual and corporate.
How do we FEEL about this new colleague, (generative) AI?
Can we trust, rely on, and feel safe in extended human-machine teams, and if not, how does this affect our inner system and way of working?
How do we decide, in teams and communities, what to use, how, and when?
Do we feel 'augmented' and 'empowered', as we are promised, or rather steamrolled, as much of the feedback suggests?
What is on your mind? What do you miss right now in terms of honest and constructive conversation and an informed approach?
Help me by answering a few short questions in my survey, and stay tuned as HeartMindMachine comes to life.
WHAT INSPIRED ME LATELY
As mentioned above, I digest 2-3 hours of AI-related content daily, and the pace of development shows no sign of slowing down (I need to find a better sleep routine soon 🫣).
Three content bits that I found especially valuable:
🤯1 The conversation between Section school founder Greg Shove and learning expert Dr. Philippa Hardmann.
The QUOTE (Hardmann): "So far we have had to choose between effectiveness or reach. If we measure the success of online async courses by reach and access metrics, online async courses have been a huge success. If we measure their success by impact on learners, they have failed (and continue to fail) dramatically."
TLDR: by tailoring the learning experience to the learner's comprehension level and preferred learning modes, AI could enhance learning overall. It remains to be proven whether the 'always on' effect and maximum personalization lead to increased "stickiness" and higher performance in assessments.
I wonder: Why do we feel differently about human and synthetic teachers? Are 'learning AI skills' and 'AI learning experiences' the same, and how can both best be provided? Will there be new ways to compensate for the lack of real teachers' presence in async learning, and what is the role of a 'real people' learning community?
🤯2 Ethical considerations, including privacy, are a big issue when talking about AI. News corporations suing big tech companies, copyright debates, and movie industry stakeholders' concerns are emblematic of all the questions coming up when machines become powerful players and substantially start changing systems.
In many ways, 'right' and 'wrong' are hard to define from my point of view, and the courts will not be able to fully help us. Not only the 'makers & shapers' (aka technology providers and decision-makers in politics and corporations) should consider the consequences; technology users also need to be better informed to act responsibly.
Innovating with AI founder and CEO Rob Howard wrote about this HERE.
🤯3 Finally, for everyone interested in the global AI regulation efforts, I recommend this podcast episode of Humanity Unchained (Apple Podcasts, Spotify) about the US Government's AI Regulation RAAIA Act, in the context of the EU's AI Act, and Post-Labor Economics.
Admittedly, all the speakers in the podcast are on the tech side, but they explain the big questions in a fairly neutral way.
I recently attended further sessions, one on Maven - Building Responsible GenAI Products with Google PM Mahesh Yadav - and another by ARIC with PwC Trustworthy AI expert Hendrik Reese. Especially in Europe, every company should prioritize developing AI governance as soon as possible, but unlike other digitalization topics, this takes place in a fast-changing environment and cannot (and should not) be led by IT alone.
What are your thoughts on AI?
How can we make the most of this incredible time by embracing the good things AI can do while preventing any negative aspects from taking over the show? Let me know!