What the Google AI Fellowship Actually Taught Me About EdTech

I want to be honest about something: when I was accepted into the Google AI Fellowship, I thought I already knew a lot about AI in education. I had been in classrooms for fifteen years. I had built programs. I had watched students do remarkable things with technology. I figured the fellowship would sharpen what I already had.

It did. But not in the way I expected.

The Credential Was Not the Point

The credential matters for conversations with funders, school districts, and partners. I am not going to pretend it does not. But the real shift happened inside my thinking, not on my bio page.

Before the fellowship, I evaluated EdTech tools the way most educators do: Does it work? Is it engaging? Will teachers use it? Those are reasonable questions. But they are incomplete ones, and I did not fully understand how incomplete until I started sitting with researchers, product teams, and practitioners who were asking harder things.

The Questions I Ask Now

The fellowship pushed me to ask questions I had been skipping. Here are the ones that changed how I work:

  • Who built this, and who did they build it for? Most EdTech tools are built with a specific student in mind, and that student is usually not a kid from a Title I school. I now look at the data sets, the use cases, and the default assumptions before I look at the feature list.
  • What does this tool assume the teacher already knows? A lot of AI tools fail in real classrooms not because teachers are not capable, but because the product assumes a level of infrastructure, prep time, or prior tech fluency that does not reflect most public school realities. I taught in those classrooms. I know what that gap looks like.
  • Where does the AI end and the pedagogy begin? This one is critical. A tool can generate a thousand differentiated reading passages, but if there is no pedagogical framework around how a teacher uses them, the AI is just doing busy work. The technology is not the lesson. It is at best a resource for the lesson.
  • What does success actually look like for this school, not the vendor? Vendors measure engagement metrics. Schools need to measure whether students are learning, whether teachers feel supported, and whether the technology is creating more capacity or more work. Those are different scorecards.

What This Changed in My Work at WCT

We Create Tech serves students across fifteen states and five countries. Our CreativX Lab program puts AI tools directly in the hands of students who are building things, creating things, solving problems that matter to their communities. After the fellowship, I audited every tool we use through these new questions.

Some things we kept. Some things we cut. A few things we built ourselves because nothing in the market fit what our students actually needed.

I also got clearer about what I will and will not endorse publicly. The fellowship introduced me to enough behind-the-scenes product development to understand that not every tool with a good marketing slide deck has the evidence base to back it up. That matters when schools are making purchasing decisions with limited budgets and no margin for error.

The Practical Takeaway

If you are an educator, administrator, or nonprofit leader evaluating AI tools right now, here is the one thing I would tell you: slow down on the demo and speed up on the questions.

Before you schedule a product demo, write down three things: who your students are, what your teachers actually have capacity for, and what problem you are trying to solve that is not being solved now. Then take those three things into every conversation with an EdTech vendor.

If the vendor cannot speak directly to your context, that tells you something. If the tool requires your teachers to become developers or data scientists to use it well, that tells you something too.

The fellowship did not make me more excited about AI in education. It made me more precise about it. Precision is what our students deserve. They have been the subjects of enough well-meaning experiments. They are ready to be the architects of something real.

That shift in how I think is worth more than any credential on my wall.

Shana Sanders
