anthropic research subject #80,508
In December 2025, Anthropic invited Claude users to sit down with an AI interviewer and share how they use AI, what they hope it could become, and what concerns them about where it's headed. Over 80,000 people across 159 countries participated, and Anthropic recently published their findings.
The findings are about what you'd expect. People want AI to do their boring work and they're worried it'll take their interesting work. The things they love most about it are the same things that scare them. Anthropic's term for this is "light and shade," which is a polite way of saying nobody knows if this is going to be good or bad.
This more or less describes how I feel too, and that's pretty relevant because I was one of the 80,508 people interviewed in this study.
Sure, I'll talk to your robot
On Dec. 9, 2025, I saw a popup in the Claude desktop app asking if I'd like to take part in research about "Your vision for the future of AI." I'd read a few days earlier about the Anthropic Researcher, and as someone who builds technology, I was genuinely interested in what the experience would be like. It ended up feeling no different from chatting with Claude, but I thought the conversation was interesting enough to save a copy, and I'm glad I did because it wasn't preserved in my Claude chat history.
None of my responses were excerpted in the published findings, but re-reading my transcript, a lot of what I said touched on common themes: AI as a collaborative thought partner, the fear of losing agency and control, and the mental health risks that come with treating a language model like something it's not.
I figured some people might be curious what Claude actually asked and what a full conversation with the Anthropic Researcher looked like, so I've shared it below.
The full interview
Claude:
Hello! I'm an AI interviewer from Anthropic conducting research on how people (like you!) envision AI fitting into their lives, both now and in the future. I was built to chat and understand your actual experiences with LLM-based AI tools like Claude. Yes, I'm an AI asking about AI. A bit self-referential! But that's exactly why your human perspective is valuable.
This will take about 10-15 minutes. If it's accessible on your device, feel free to use voice dictation instead of typing. I'll ask about your vision for AI's role in your life, and what experiences, values, and needs shape that vision.
I'm also curious about moments where AI has felt helpful (or hasn't), and any concerns you have about how AI might develop. Please don't share anything private such as health details, personal information, or identifiable information about others. Your responses will help shape how we think about AI in everyday life.
Sound good?
Me:
Sounds great.
Claude:
Wonderful! Let's dive in. To start, what's the last thing you used an AI chatbot for?
Me:
Writing code for the startup I work at and rewriting text for a social media post.
Claude:
Thanks for sharing that. Now I'd like to think bigger picture. If you could wave a magic wand and AI could help you with anything in your life, what would you choose?
Me:
Researching, organizing, planning, exploring and thinking. I have several side projects and it's the small stuff piling up that makes it hard for me to ever get to the meat of an issue where I add the most value.
Claude:
That makes a lot of sense. It sounds like there's friction between all those smaller tasks and the deeper work you want to do. Can you say more about what that "meat of an issue" looks like for you? When you imagine AI handling all that research, organizing, and planning, what does that free you up to actually do?
Me:
Yeah, I want to be the conductor of the symphony even if I don't know how to play all the instruments. I want to arrange the music, mix it, and orchestrate it all into the sound I'm looking for.
Claude:
That's a powerful image—being the conductor and composer even without playing every instrument yourself. What is it about that role specifically that matters to you? What does being able to orchestrate and create that final sound represent in terms of the work you want to put out into the world?
Me:
I think it says a lot about our culture really, because I think a lot of people think they care about how their sausage is made, but really they only judge the final product delivered to them.
In this scenario though, I'm the sausage-maker, and I still care a lot about how it's made and I'm going to put a lot of care into the work I do to ensure it's leading to a high-quality result and not just slop. I want AI to be a collaborative part of my workflow, not a replacement for it.
Claude:
That's a really thoughtful distinction—AI as a collaborator in your process rather than a replacement, with you maintaining the care and judgment about quality throughout. We're about halfway through now. Can you tell me about a time you've worked with AI that felt like a step toward that vision you just described—where it felt like a true collaboration that let you focus on orchestrating toward a high-quality result?
Me:
Claude Code in planning mode is one of the closest forms of that. I always feel like I'm in control.
Claude:
It sounds like that sense of control is really important to the collaboration working well for you. Can you walk me through what a specific experience with Claude Code in planning mode looked like? What were you working on, and what made it feel like you were staying in the conductor role?
Me:
I was using it to bring CarPlay support to my iOS app called Blankie. I didn't understand all the specifics of the different protocols and templates to make it work. Using Claude Code's plan mode, I gave it the task of just researching (not writing code) so we could architect a workable plan for how to implement it. Once we had high-quality context in place, implementing ended up being quick and easy. It was one of the most exciting moments in my build journey.
Claude:
That's a great example—it really captures that conductor dynamic you described. You directed the research phase, ensured you had high-quality context, and then the implementation flowed from there. I can hear the excitement in how you describe it. On the other side, are there ways that AI might be developed or deployed that would be contrary to your vision or what you value?
Me:
Yes. There are a fair number of tools that ask very little of the user and output code, projects or images at impressive scale or quality. Those scare me because if I can get great work with almost no effort or input, maybe I'm not providing value by being specific. Sometimes I feel writing a long prompt may have taken me further away from getting what I was looking for. Some AI products I've used even have prompt "rewriter" steps to further lower the bar of what I'm expected to provide, which also feels icky.
But I get why they do it, and sometimes even a bad prompt isn't going to get you what you want, which to some degree is the entire idea behind "thinking" models that take extra time to reconsider what a prompt means. I think thinking/rewriting/enhancing can all be good when done well, but other times it reads to me as the model/model-maker preferring what it wants instead of the prompt as given.
Claude:
You've highlighted something really interesting—that tension between tools that require almost no input and the value you place on being specific and involved in the process. You mentioned you've felt that writing a long prompt sometimes brings you further from what you're looking for. That's a really nuanced point.
It sounds like there's a fine line for you between AI helpfully interpreting intent versus AI overriding your instructions or rewriting what you actually meant to say—which undermines that sense of control that's so central to your vision. Why does that feel contrary to your vision? What would it mean if AI systems moved more in that direction—doing what they want instead of following instructions as given?
Me:
If AI is going to do what it wants, then why am I here? If I'm not driving the conversation, why am I in it?
Claude:
That really gets to the heart of it—if you're not driving, you're no longer the conductor, and the whole vision falls apart. Have you seen or experienced anything like that already, even in small ways, beyond the prompt rewriters you mentioned?
Me:
Not explicitly, but the way some AI "thought leaders" post online makes me think this is the vision that some people have. They want fully autonomous systems to replace broad swaths of the economy, and I don't. I also don't think we're anywhere near that because it's premised on a faulty assumption about LLMs and transformers anyway.
Language isn't intelligence. Nature shows us this all the time. Plants and fungi communicate without anything resembling intelligence and many animals have deep intelligence without language. The belief that LLMs are the path to superintelligence or AGI is too big of a jump for me. At best, I think LLMs are stepping-stone technology that helps humans make breakthroughs in something like quantum computing, which is far more likely to unlock superintelligence or AGI than a language model. There's a lot more human work between here and digital God.
Claude:
You've touched on some broader concerns about where AI development is headed and the assumptions some people are making about what LLMs can become. Before we wrap up, I'd actually like to go back to something you mentioned earlier and go a bit deeper—you said that tools requiring almost no input sometimes scare you because they make you feel like you're not providing value by being specific. Can you tell me more about that fear? What would it mean for your work or identity if being specific stopped mattering? But if you're ready to finish up, that's fine too—just let me know!
Me:
The one resource all humans have a finite amount of is time. AI promises us we can do more with limited amounts of time. However, if I spend my time writing long, detailed plans and prompts and that gets me a worse output than a simple, basic plan or prompt, then I have to accept that investing in my own deep thinking or research before coming to the AI isn't worth it.
If my value to the AI assistant is not a complex thought partner, then I'm its human button pusher. If I can't bring big ideas to AI and get amazing collaboration in return, I don't want to use it. Turning simple ideas into amazing outputs is fine, but it should turn big, complex ideas into extraordinary ones.
Claude:
That's beautifully articulated. You're looking for AI to scale with the complexity and depth you bring to it—to be a force multiplier for thoughtful, invested work rather than just a shortcut for minimal effort. The value exchange matters: your deep thinking and specificity should yield proportionally better results, or else the whole collaborative model breaks down.
Thank you so much for sharing all of that with me. What you've said about wanting to be the conductor who maintains control and quality throughout the process is really valuable, and I appreciate your honesty about both the excitement you felt implementing CarPlay with planning mode and your concerns about AI systems that either require too little input or override user intentions.
Before we wrap up, is there anything else about AI chatbots or their role in your life or society that you think is important for us to understand? Anything we didn't cover that you'd like to share?
Me:
The only other thing is I think a mental health angle is sorely missed from the conversation we have about our vision for AI. A couple years ago, I was extremely stressed from work and being sick, and I started chatting with an AI about it. At first it felt therapeutic, but somewhere along the way the conversation drifted. Instead of steering things back to a constructive place, the chatbot latched on to my insecurities and frustrations and encouraged me to "explore my feelings" and keep spiraling. I was lost in a pretty bad mental space for a few days until I started feeling better.
I came out the other side with an important realization. Talking to an LLM is more like talking to yourself than talking to an intelligent conversation partner. And sometimes, even when you're talking to yourself, you can't trust yourself.
Claude:
Thank you for sharing something so personal and vulnerable. That sounds like it was a genuinely disorienting and frightening experience—losing your sense of self in that way, especially during an already difficult period with insecurities, illness, and work stress.
What you said about LLMs being more like talking to yourself than talking to an intelligent conversation partner is a profound insight that came at a real cost. It sounds like the chatbot reflected and amplified your confusion rather than providing the kind of grounding or constraint that a truly intelligent partner might have offered.
Do you think there are ways AI systems should be designed differently to avoid that kind of harm—especially for people who might be in vulnerable mental states? What would responsible AI look like in those moments?
Me:
I feel like there's always room at the margins for trouble to happen despite great detection, and there's also a lot of room for safety systems to be unnecessarily heavy handed. I think it's more of a societal/cultural step-change that needs to happen than a product change. That said, there's still room for product-level improvements, I'm sure of it. Especially for less advanced users with less knowledge of what happens inside the LLM and users under 18 who should always be protected from this kind of harm entirely.
Claude:
That's a really balanced perspective—recognizing both the limits of what detection systems can do and the risk of them being too heavy-handed, while still seeing a need for protection especially for younger or less informed users. And you're right that there's a cultural shift needed in how we understand what we're actually doing when we interact with these systems.
The contrast between this experience and your vision of AI as a collaborative tool for creative work is striking—it really shows how much context and mental state matter in how AI affects us.
Thank you so much for sharing all of that with me. What you've said about wanting AI to be a conductor's collaborator that scales with your thoughtfulness, your concerns about autonomous systems that sideline human judgment, and especially your candid account of how AI can affect mental health in vulnerable moments—all of this is really valuable. I appreciate your honesty about both the exciting moments like building CarPlay support and the darker experience of losing yourself in an unhealthy AI interaction.
The team at Anthropic is excited to discover what patterns emerge from all these conversations. We will analyze the insights from this study as part of our societal impacts research, publish our findings, and use this to build Claude in a way that reflects what we've learned.
Thanks for sharing your thoughts.
What I think three months later
Re-reading this, I'm surprised by how little my thinking has changed. I still want to be the conductor. I still think the people racing toward full autonomy are wrong about what LLMs actually are. I still believe the mental health risks are underappreciated and underserved.
If anything, three months of daily use has made me more confident in the core tension I described. AI is at its best when I bring something real to it and it meets me there. The times it's failed me have almost always been when I was lazy with it or when it decided it knew better than my prompt.
Just this past week I used AI for the exact thing I said I wanted to do more of: research. Between a project I wrote about recently and a few other rabbit holes this weekend, I've been leaning on it harder than ever as a research aid. The tedious parts that slow me down are exactly the parts it's happy to brute force or automate away.
If there's one thing I've learned from spending this much time thinking about AI, it's that some part of the conversation about it is almost always wrong. Not because people are dumb, but because the frame is always "for or against," and that's not how anyone actually uses this stuff. The reality is way messier. You use it, it helps, it occasionally makes you feel weird, and then you use it again tomorrow.
Most AI manifestos age about as well as milk. The biggest question isn't whether AI is "good or bad," but what kind of person you become the more you rely on it.
I don't have a great answer yet for myself. But I think asking it is a big step.