How ChatGPT Study Mode Helped Me Fine-Tune a Foundation Model
By Gemma Lara Savill
Published on August 16, 2025
ChatGPT's new "Study Mode" offers a fascinating approach to learning, moving beyond simple Q&A to a more interactive and guided process. Instead of just getting an answer, the mode uses custom instructions crafted with educators and scientists to foster deeper learning. It encourages active participation through Socratic questioning and self-reflection prompts, managing the cognitive load of complex topics by breaking information into manageable, scaffolded responses. This isn't about finishing a task; it's about building genuine understanding, with personalized support that adapts to your skill level and checks for knowledge retention along the way.
I thought I'd give it a go to finish my project for the Udacity Generative AI Nanodegree; I'm on Course 2: Generative AI Fundamentals. I don't want the AI to "do it for me", I want to learn, and if Study Mode helps me learn, that's great.
This makes me reflect on how I learn. It used to involve a physical book: reading a chapter, maybe underlining parts with a pencil. Then I would make some notes in a physical notebook, do the exercises, get stuck, go back to the book, and so on. Now I use more online, multimedia content and not so many physical books, but I still take handwritten notes, these days on my iPad. Old habits die hard.
Setting up the Study Mode Session
Now that it has rolled out to most users, it is quite simple: just select "Study Mode" from the menu under the prompt. If you don't see it, you can type /study in the prompt and it will load Study Mode for you.
I then gave the prompt a really long explanation of the project I wanted to complete, along with a lot of context about what I had already been studying. The idea here is that the more context you give it, the better the chances of the model actually helping you in the way you expect.
After all this, it came up with an easy-to-follow, step-by-step plan for approaching the project.
I had already told the model I was planning to do the project in Google Colab. It didn't object, but then it wouldn't: these AI assistants are built to be helpful, not critical. If I wanted a discussion of pros and cons I would need to ask specifically, for example "what other environments could I use to create a Python notebook?" or "give me the pros and cons of using Google Colab versus a local VS Code and Python setup to create a notebook to fine-tune a foundation model".
The setup instructions were good. It even told me to set the Google Colab runtime to GPU; I might have missed that until training took too long.
Planning
Then it offered a plan of steps, with skeleton comments that I could complete one at a time. I enjoyed this, as my time is limited: with the work laid out in steps, I can leave it and come back to it later with less effort and a cleaner picture of what needs doing.
The steps it offered were logical and fit in with the course:
- Setup & Install Dependencies
- Imports & Config
- Load Dataset
- Tokenize Data
- Load Base Model & Evaluate
- Create LoRA Config & Apply PEFT
- Fine-Tune LoRA Model
- Load Fine-Tuned LoRA Model & Evaluate
- Compare Results
- Wrap Up: Add any discussion / analysis here
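The steps above can be sketched as a rough notebook outline. This is only a minimal sketch, not my actual project code: the model name (distilbert-base-uncased), the dataset (imdb), and every hyperparameter below are placeholder assumptions I've filled in for illustration, using the standard Hugging Face transformers, datasets, and peft APIs.

```python
# Rough sketch of the notebook skeleton (placeholder model, dataset, and hyperparameters).
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
from peft import LoraConfig, TaskType, get_peft_model

# Imports & Config (names here are illustrative assumptions, not the project's choices)
base_model_name = "distilbert-base-uncased"

# Load Dataset & Tokenize Data
dataset = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length"),
    batched=True,
)

# Load Base Model (evaluate it here to get a "before" baseline)
model = AutoModelForSequenceClassification.from_pretrained(base_model_name, num_labels=2)

# Create LoRA Config & Apply PEFT: only the adapter weights will be trainable
lora_config = LoraConfig(task_type=TaskType.SEQ_CLS, r=8, lora_alpha=16, lora_dropout=0.1)
peft_model = get_peft_model(model, lora_config)
peft_model.print_trainable_parameters()

# Fine-Tune LoRA Model
trainer = Trainer(
    model=peft_model,
    args=TrainingArguments(output_dir="lora-out", num_train_epochs=1),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
)
trainer.train()

# Load Fine-Tuned LoRA Model & Evaluate, then Compare Results with the baseline
```

The structure of the sketch simply mirrors the step list; filling in the evaluation and comparison cells is where the actual learning happens.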
I thought this was quite a good start: it hadn't given the game away, and I still had to write the code for my project. But now I was set up with a notebook layout to complete and could visualize what needed doing. The structure mirrored what I'd learned in the course, which gave me confidence to proceed.
You can read the detailed walkthrough of the project and the code in my follow-up post: Applying Parameter-Efficient Fine-Tuning with Hugging Face.
So after all the setup, Study Mode took me through each step, and my following prompts were variations of "Yes, please", "Yes" and "Go", as after each step I was able to understand the material and take up the offer to progress to the next one.
Diagrams
I had several questions about the LoRA library, and I was offered a diagram! I love diagrams for learning purposes. It took some time to produce the image, but it was worth the wait; here you can see what it produced.
This visual was a game-changer. The diagram illustrates the core concept of Low-Rank Adaptation (LoRA) by showing how the technique works under the hood. It helped me visualize that instead of training the entire model, LoRA works by adding new, small, and trainable LoRA weights to the existing model's frozen weights. This process targets key components like the Attention and MLP (Multilayer Perceptron) layers, allowing for a much more efficient fine-tuning process.
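To see why training only the added weights is so much cheaper, here is a tiny NumPy sketch of the low-rank update. The dimensions and rank are made-up toy values, not anything from a real model: the frozen pretrained weight W stays untouched, and only the two small matrices A and B would be trained.

```python
import numpy as np

# Toy illustration of the LoRA idea: the effective weight is W' = W + B @ A,
# where B is (d_out, r) and A is (r, d_in), with rank r much smaller than d.
rng = np.random.default_rng(0)
d_in, d_out, r = 512, 512, 8           # toy sizes; r is tiny compared to d

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight (not trained)
A = rng.normal(size=(r, d_in)) * 0.01  # trainable, small random init
B = np.zeros((d_out, r))               # trainable, zero init, so W' == W at the start

W_adapted = W + B @ A                  # effective weight after adaptation

full_params = W.size                   # what full fine-tuning would update
lora_params = A.size + B.size          # what LoRA actually trains
print(f"full fine-tune params: {full_params}")
print(f"LoRA params:           {lora_params} "
      f"({100 * lora_params / full_params:.1f}% of full)")
```

With these toy numbers, the adapter holds only a few percent of the parameters of the full weight matrix, which is exactly the efficiency the diagram was getting at.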
Conclusion & Lessons Learned
Using ChatGPT in Study Mode resulted in a step-by-step, tutorial-style session. The formatting was much clearer: lists, icons, short descriptions, all very visual.
Google has also announced a "Guided Learning" mode for Gemini, and Claude is pointing to an educational mode where you get "a thinking partner, not an answer machine". I think that is spot on, and exactly what I needed: something to help me think, not give me all the answers.
In this exercise, ChatGPT Study Mode helped me really understand the what and the why. The how is not so hard to figure out these days, but if you want to learn something you can apply by yourself, as a tool in your personal toolbox, the new learning modes are a good option.
For the full project details, including the code and results, check out my next post: Applying Parameter-Efficient Fine-Tuning with Hugging Face.
As these new learning modes continue to evolve, it's worth exploring how other platforms are approaching this challenge. Here are some of the resources that inspired this post:
- OpenAI: Introducing study mode. Read more about the official launch of ChatGPT's new learning mode.
- Guided Learning in Gemini: From answers to understanding. Explore Google's approach to creating a guided, educational experience with Gemini.
- Navigating AI in education together. See how Claude's creators at Anthropic are thinking about AI as a "thinking partner" for learning.