Companion bots highlights two equally important realities, IMHO,
Humans are still highly social creatures that require social and emotional activities to remain healthy.
If you are making a chatbot, bring a social psychologist onboard to ensure your app does no harm.
Number 1 is easy to wrap our heads around.
Number 2 provides an opportunity to prevent unintentional myopic thinking, echo chambers, and IRL isolationist behavior (e.g. opting to only engage with chatbots rather than live humans).
Why is this important?
Myopic thinking and isolation fosters “othering” behavior, which leads to hate and bias. These things left unchecked have never proven to be a good thing.
So, if you are building chatbots, use your powers wisely.
tl;dr: The rise of AI Agent Development is reshaping how we develop software, create professional development pathways, and teach coding.
A recent job post from Firecrawl caught my eye: they’re offering $5,000/month to “hire” an AI Agent—built by someone else—to autonomously perform content creation tasks (Firecrawl Job Post).
I’ve been thinking a lot about this whole “people who build AI Agents” thing, and this job post validates it: a new software stack is emerging, and it’s the AI Agent Development. We’re entering an era where AI Development is becoming a distinct discipline—sitting right alongside Front-end, Back-end, and Database development.
Firecrawl is leaning into this future by treating AI agents as first-class contributors—products in their own right, not just tools. They’re not only contracting these agents, but also providing a potential pathway for their human creators to join the company full-time.
It’s a bold signal: the people who build and orchestrate AI systems are becoming central to modern tech teams.
This shift changes everything. Professional development (PD) for developers will need to evolve, accounting for developing, integrating, collaborating with, and managing autonomous AI systems. The same goes for how we teach programming—skills like prompt design, agent chaining, and orchestration will become table stakes.
Firecrawl’s post is a glimpse into a future that’s already here.
I was reminded of Dr. Buolamwini’s words when I read about a study finding gender bias emerging from some LLMs. The tl;dr of the study:
If queries presented female (i.e. name, language, or impression), responses were simplified or redirected to less technical stuff (e.g. design over coding).
If queries presented male, responses includes more detailed steps and technical language (e.g. jargon).
Bonus-Ugh: If queries presented female, responses were 23% more likely to include phrases like “Don’t worry if this seems complicated.” Aka, the LLM assumed a more emotional response from the user.
”Women (were) mainly assigned job titles such as graphic designer, fashion designer, or nurse and men assigned job titles such as software engineer, architect, and executive… ChatGPT has a hard time associating male pronouns with nurses and an even harder time letting female pronouns handle a pilot’s duties of getting a plane ready for landing.”
Why Is This? Our collective unconscious lives (and thrives) in the training data. Genders are associated with certain jobs because the source material does so. The associated responses to technical questions is because of the training data.
What Can Be Done? All three of the below are critical to improve our collective training data:
Be aware of what you put online. Remember, LLMs often use web content to train on.
Report what you find. Most LLMs have a reporting mechanism. Use it.
Support an organization. Some organizations include:
Distributed AI Research Institute (DAIR)
Center for Responsible AI at NYU
Montreal AI Ethics Institute
References: Buolamwini, J. (2023). Unmasking AI: My journey to hold AI Accountable. Penguin Random House.
Emelyanov, A., & Chuprina, S. (2025). Ethical and security aspects of multimodal foundation models. Array, 19, 100295. https://doi.org/10.1016/j.array.2025.100295
Kennedy, P. (2024, March 22). New study finds gender stereotypes persist in ChatGPT. TechXplore. https://techxplore.com/news/2024-03-gender-stereotypes-chatgpt.html
Problem: Digital-device attention span has dropped to ~40 seconds within just the past 8 years (Duke, 2023).
Solution (for IDs and LDs): Context Challenge Activity Feedback
Dr. Gloria Mark’s research indicates that our screen-based attention spans has dropped from ~75 seconds in 2012 to ~40 seconds in 2020 (Duke, 2023). However, that’s only part of the story. True, there’s a reduction in average attention durations on digital devices, but other things are true too: binge-watching is on the rise (cite), we spend more time on our devices than before (cite), etc. So, what’s the deal?
Information is in abundance, and distractions abound. So, how do Instructional and Learning Designers prevent attention drifting? Answer: Michael Allen’s Content Challenge Activity Feedback (Allens Interactions, 2021).
Context Challenge Activity Feedback
As IDs and LDs, we know behavioral changes require us to define what learners should be able to do by the end, and we know the knowledge learned along the way is a natural part of the scaffolding. CCAF takes it a step further by giving learning a purpose: it wraps activities within the a context and a challenge (or puzzle).
It answers the question why do these things.
1. Context
The situation, or environment, in which learning occurs. Depending on the learning need, this may include some background and relevant circumstances.This way, learners relate to the material and understand its application in real-world scenarios, which provides some motivation. Ideally, create a context the learners are deeply invested in already.
2. Challenge
Allen defines this as the problems or tasks learners must confront within the context. However, I like to frame these as puzzles to solve. The goal is to stimulate critical thinking and problem-solving skills. With meaningful challenges, it stimulates active engagement. Pro-tip: highlight the reward that comes with solving the puzzle. This shifts the mindset from solve this “or else” to the mindset of solve this “and get”.
3. Activity
Defined, specific actions or tasks learners do to solve the challenge. Simulations, deliverables, role-playing, and discussions are some common activities. Bonus, keep learners interactive and collaborative to increase a sense of community among learners.
4. Feedback
Clear feedback that celebrates wins and provides guidance areas that need improvement. I emphasized “need” here because it’s human nature to provide areas of improvement regardless of level of proficiency; things can always be better than what they were. The trick here is to only provide areas of improvement that are necessary. fight the desire to point out every areas of improvement.
Final Thoughts
Philosopher Alasdair MacIntyre, (2007, After Virtue, pp. 216) who called humans “story-telling animals”, believed that our ability to tell stories or place moral identify within contexts and scenarios, is what made us truly human. I like this idea that stories are what give us the human condition. Our brains, which learn by comparing and contrasting new things to what it already knows, build themselves on past context — aka stories and memories.
CCAF creates a story wherein the learner becomes the hero of the story. No wonder it works to prevent someone’s attention from drifting.
“It is no longer believed that neurons in the brain are incapable of being regenerated. It was once wildly believed that we are born with our full conpletement of neurons and that new neurons are not gerneated. This idea is now untenable, at least in a region called the dentate gyrus.” ~ Jamie Ward (2015) Student’s guide to cognitive neuroscience.
This means you need to work your brain out to keep it healthy just like your heart. You do this by learning something that’s just beyond the reach of what you think you can do.
Charlotte Danielson’s article in Education Week has been on my to-read list since it came out, and I’m glad I finally got to it. Chalked full of ideas and thoughtfulness, it’s a must read for any administrator looking to create a better culture around the profession of learning.
My favorite spot:
“There is professional consensus that the number of teachers whose practice is below standard is very small, probably no more than 6 percent of the total, according to the Measures of Effective Teaching study and others…
Given this landscape, it makes sense to design personnel policies for the vast majority of teachers who are not in need of remediation. And, given the complexity of teaching, a reasonable policy would be one that aims to strengthen these educators’ practice. Personnel policies for the teachers not practicing below standard—approximately 94 percent of them—would have, at their core, a focus on professional development, replacing the emphasis on ratings with one on learning.”
Yes! Teachers are masters of the social science of learning. Our profession of understanding learning is twofold: one is student-facing and one is profession-facing. It’s like a good psychologist: they know how to navigate a session and what to prescribe for a patient to improve their mental health & at the same time they are constantly learning more about the science of psychology in their field.
Looking forward to new ways to support this profession as it grows into a respected field.
Went to the Vermont Fest 2016 this week, where the best tech and innovation minds in Vermont’s education system come to share ideas. A common theme I heard was differentiation. However, in many sessions, teachers were frustrated that they didn’t leave with enough tools to really help implement. So, I’m sharing my favorite go-to: a differentiation framework I built for myself that helps me check to see if my lesson really does differentiate for students. It’s based on the premise that I give students a survey at the beginning of the year that identifies a few of their foibles, interests, fears, strengths, and weaknesses as learners and a bit more about who they are as people. Armed with that info, this framework becomes my go-to to keep myself accountable.
Enjoy! And if you have questions about how you can do more to implement differentiation in your classroom, reach out. I’m happy to help.
IBM is working with a Texas school district to pilot a one-stop-shop for student data, both qualitative and quantitative. IBM’s powerful Watson is spending the year gathering loads of data entered by systems and humans (like teachers) about the students with the hope of providing insight into students that is more comprehensive than traditional assessment scores.
If this proves to work, this could free up brain space for teachers and staff to take that data and make use of it.
“Education is one of the formative technologies of human civilization, a constructed system of logically ordered parts intended to be the bedrock of social and political advancement.”