So, I finally had a little time and got to try the new (experimental) "notebook" tool by Google called NotebookLM. If I were to ask an AI to generate a picture of what I see it as, or what it might become, imagine the idea behind OneNote, but with generative AI, deep learning, and multi-output generative support... on steroids. And in theory, the more you use it, the smarter it supposedly becomes about you, adapting to your needs. AI processing meta-cognitive notes, perhaps?
“NotebookLM gives you a personalized AI collaborator that helps you do your best thinking. After uploading your documents, NotebookLM becomes an instant expert in those sources so you can read, take notes, and collaborate with it to refine and organize your ideas.”
Google
This experimental Google product is an interesting in-the-wild research case for the idea. Is it being marketed as a personalized AI research assistant? Or is it perhaps Google's idea of how it can learn as you learn, so that it can learn how to better help you learn?
The appeal is there, especially in academia. I gave a talk to faculty just a few weeks ago, suggesting that a future use case for AI could be as an interface to dense or long-running research experiments, and posing the question:
What if you could have a conversation with your notes?
It looks like this is Google’s attempt to open up the idea in its own way.
Companies like Dropbox, Apple, and Microsoft have been building generative AI tools aimed at this idea. Google's NotebookLM, originally known as Project Tailwind, set out to reimagine what note-taking could look like if an LLM were at the core and the notebook software were built on top of it. The concept behind NotebookLM isn't just to use an AI to summarize what it knows, but to summarize what you know within your notebook and bring conversation to the sources you have there.
Beyond just Notes or Googling it
I wanted to put arguably its most unique feature to the test: its audio "podcast" conversion ability. As a podcaster of over 15 years, I've hosted hundreds of shows and can tell you that podcasting takes time to develop. From having the initial idea, to researching, scripting, and actually scheduling the time to sit down and record, to finding chemistry on the mic if you record as a duo... well, this blows that timeline out of the water.
To test its function, I gave it one single-page PDF, and a flow chart, no less. This is a test I often use with AI, as I want to see if its machine vision can parse the ideas behind what the blocks represent and not just tell me the obvious, which is that it's some kind of chart. AI is usually pretty good with concrete information, such as a page of text. But this test was not a tome of academic writing; it was a pathway diagram of how WWU supports AI and the academic process within a faculty member's course, which is abstract and requires the ability to make connections. So, with this PDF, I enabled the "make a podcast" option, and that's it: no additional context, no follow-up commands. And it was a brand-new notebook, so there was no other data within my notebook. In less than three minutes, it produced a conversational podcast summary.
I was stunned! I've used several other AIs for audio (and video) generation, but this blew away my expectations. The tone, fake breaths, verbal fillers, and the artificial chemistry between the hosts were incredibly convincing. What floored me and the colleagues I've shared it with is the lack of mechanical readout, although it is still there at times if you listen for it. Up to this point, Colossyan and ElevenLabs have been the companies I reference most as having really pioneered, if not established, the gold standard in this kind of output; but based on this test, Google might not be too far behind.
To be clear, I have not done a deep dive into NotebookLM, but even in my initial usage, it is clear that it's more than just a notebook, and clear how it might be both a good and a bad tool in how AI assists research and learning. If academia was worried about OpenAI being everywhere, in everything, and coming after college writing, Google may have just upped the ante.
The Future of Learning: Metacognition and Personalized AI
This brings me to a futurist's viewpoint: what if this technology could do more than just summarize content? What if it could act as a cognitive partner for students or researchers? Imagine a scenario where students' notes could reflect back on their own understanding, performing their own metacognition and actively helping students identify what they know, what they don't know, and what they need to revisit. Or, better yet, helping them find the gaps in their learning or the flaws they were blind to in their research. To borrow Microsoft's verbiage, it would become a learning Co-Pilot, tailored to the student on their learning journey. But is this bettering education? Or is it removing the last bastion of human uniqueness: independent critical thinking?
Ok, so, what else can it do?
Again, time will tell how this experimental product fares, whether it gets fully rolled out, what features it will have, and whether it will remain free to use. But perhaps the single most important tool for most academics is sources, and typically AI isn't very overt about how it derives its responses. The ability to validate data is paramount in academia, and it is critical to see where this black box of an AI is getting its information, whether you are a student, a researcher, or faculty.
Within NotebookLM, one conversation you can have with it is asking it to provide Source Guides.
It provides summaries of each document referenced in your notebook, along with key points, and, much like most AI agents, it can of course help you with what to ask next. These guides offer a quick and potentially effective way to sift through large amounts of gathered information, and they could save educators and students time when organizing research materials. This brings me back to my initial statement: what if you could have a conversation with YOUR notes or YOUR research, and perhaps even have your research show its work back to you? Not only could it open up new pathways for discovery, it could also make research outcomes more digestible for everyone: making research less of an arduous task, making it accessible, and allowing more avenues for others to build off of and contribute back to the body of research, opening up the potential for faster research and results.

It reminds me of a statement I heard earlier in the year from an online workshop host who used to teach English in college. It was a workshop on AI in higher education, and she was pointing out the pros and cons of AI in the field of language and writing. She noted that we need to remind students that writing isn't so much an epic journey to slay a dragon; rather, it's more like splitting wood. We only get stronger the more we chop, or in this case, write. Research doesn't have to be a dragon to slay, perhaps. Perhaps AI can help get us back to the chopping of the wood that makes research outcomes so important, making academia stronger in the process that is research.
Analog / Digital: The Little Grey Cells
With any technological advance, there’s always the contrary side to this futurist’s vision. Could such a tool unintentionally weaken students’ ability to process their own notes and synthesize information? AI tools like NotebookLM might offer streamlined summaries and even perform some aspects of metacognition, but will students lose the vital skills of reading deeply, analyzing texts, and constructing their own understanding of the material?
There’s a risk here of shifting from a “human in the loop” model—where students remain actively engaged with their notes and rely on AI to assist and enhance the process—to a “human on the loop” scenario, where students become passive recipients of AI-generated summaries. In the latter case, students might defer too much to AI systems, potentially eroding their critical thinking and active learning skills. Do systems like this take the human experience of intellectual inquiry and discovery out of education?
I think this challenge just reinforces the need for strong analog research and note-taking skills. Again, it's about chopping wood, not slaying a dragon. In theory, students who can effectively combine traditional research methods with AI-enhanced tools will have a distinct advantage, able to leverage AI to draw out even more from research than they could from straight analog reasoning alone. Analog critical thinking keeps the AI output in check and valid. I honestly see similarities to the early days of advanced sciences and computer modeling. Sure, we got footsteps on the moon from paper, pencils, slide rules, and about the amount of computing power most toasters have today. But even with today's best computers, there is still a need for the skills to work the maths and check computer-generated models. Meanwhile, one can argue that, if anything, computer-based models provide an enhanced level of human protection while humans work the analog maths: models that can better predict the weather to save lives, or make a car safer going down the road with the sensors it has, with a human still at the wheel.
For me, this debate isn't a new one, and it will probably never die. Outside of academia, as a Master Diver, do I have to take my no-decompression dive tables and an analog watch down on my dives? No. And the modern dive computer that does all that computing has made diving safer in the process. Now, is there still a need to know the "why" and "how" of what a dive computer does when a diver goes to depth? Absolutely. There is still a group of us, admittedly small, who carry analog gear and a mechanical dive watch as backups because of it. In diving, if there is a failure and that dive computer goes out, the only thing that will keep a diver safe is the analog knowledge and dive skills they have practiced. Again, this strengthens the construct of humans in the loop. This dynamic interaction with content could change the way we approach teaching and learning. But a human brain, with all its little grey cells and practiced knowledge, is still the ultimate tool in cognition and application in the real world.
Students will need the discernment to know when to rely on AI and when to engage in the slow, sometimes messy, but ultimately rewarding process of exercising their own little grey cells (as Hercule Poirot would remind us). After all, the tool is only as useful as the user who wields it.