On Academic LLM Usage

Our existing intuitions about how to be a good student apply seamlessly to LLMs. LLMs pose no ethical dilemma about academic honesty that didn’t already exist for anyone with access to a smart person willing to try to do whatever you ask of them.

Of course, some cheaters may find it much easier to cheat, since no one is watching their LLM interactions. Asking a smart person to help you cheat requires you to trust that person somewhat and carries some risk that they will refuse to help or judge you for cheating. But your own context-dependent sense of what is right is unchanged by the human or nonhuman nature of your assistant.

When people say things like ‘don’t copy and paste what the LLM wrote’ or ‘make sure the ideas are your own’, it’s a little boring, because these aren’t Golden Rules of all LLM use - they are rules that apply in many academic contexts and aren’t at all specific to LLMs.

We can and should think about what it means that everyone might soon have access (and in limited cases already does) to LLMs which are far smarter than they are and can write about ideas at a level they cannot keep up with. Imagine your average English major using ChatGPT to write Python code. They probably won’t know how to run it, let alone understand it or explain it well themselves. We may soon see AI which can put every human in the position of the English major for most topics.

This would be roughly like everyone having a personal expert, in every topic for which enough training data exists, who does whatever you tell them to do.

It’s not hard or interesting to think about what is and isn’t cheating in this context. However, access to an expert who will help even when accepting that help is cheating will, for some, be a serious temptation. If you believe it is important to design academic systems that disincentivize cheating, then we need to take the existence and capabilities of LLMs very seriously. Once LLMs reach a certain level of capability, students should not be trusted to uphold the honor code and should be evaluated in a manner that makes it impossible to pass off the work of an LLM as their own.

To summarize: what is and isn’t right is obvious here. The best way of getting students to do the right thing may already have changed a little and will likely change more as LLM-enabled cheating becomes easier.


P.S. Today I heard someone raise the concern that using LLMs might homogenize thought. This is interesting - it’s something we can try to test in different domains: give a set of people a creative task, give only some of them access to AI, and compare the results. Whether homogenization happens is mostly or entirely a question of model capability - how versatile is the model? But it might be a real problem in some respects. Certainly, right now a given model has a particular writing style, and if everyone wrote everything with Claude, writing styles would be far less diverse than they are now.
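As a rough sketch of how the ‘compare results’ step could be scored, here is a minimal Python example - my own illustration, not from any particular study - that uses average pairwise TF-IDF cosine similarity as a crude proxy for homogeneity. The sample texts and the choice of scikit-learn are assumptions; a real study would want a more careful measure of stylistic diversity.

```python
# Toy homogeneity metric: mean pairwise cosine similarity of TF-IDF vectors.
# Assumption: scikit-learn is installed; the sample essays are placeholders.
from itertools import combinations

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def homogeneity(texts: list[str]) -> float:
    """Return mean pairwise similarity: ~0 = diverse writing, ~1 = near-identical."""
    vectors = TfidfVectorizer().fit_transform(texts)
    sims = cosine_similarity(vectors)  # (n, n) similarity matrix
    pairs = list(combinations(range(len(texts)), 2))
    return sum(sims[i, j] for i, j in pairs) / len(pairs)

# Hypothetical experiment: essays from an AI-assisted group vs. a control group.
ai_group = ["First essay drafted with model help...", "Second assisted essay..."]
control_group = ["First unassisted essay...", "Second unassisted essay..."]

print("AI-assisted homogeneity:", homogeneity(ai_group))
print("Control homogeneity:", homogeneity(control_group))
```

If the AI-assisted group consistently scored higher, that would be weak evidence for the homogenization worry, at least at the level of surface vocabulary.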

This person went on to express the worry that since some cultures are overrepresented in the LLM training data, using these LLMs silences underrepresented cultures. Unless there is a compelling argument which I haven’t heard, this is dumb - it’s exactly like saying that since some cultures are underrepresented at Williams College, studying there silences certain voices. There’s nothing specific to AI here that I can see.

In general, if you are worried about something with regard to AI, explain what about the problem is unique to AI. If it’s a problem you’re worried AI might exacerbate, describe how AI might do that and what to do about it. If the ‘what to do about it’ isn’t specific to AI, then don’t bring it up - you’re just bringing up your unrelated pet concern and distracting from the conversation.