Currently, there is a policy that prohibits AI-generated content. However, as long as users take the time to thoroughly review and ensure that the AI-generated content accurately reflects their intended message, I don’t see an issue with copying and pasting content from AI tools. If, after reviewing the content, the user is unsure about its accuracy, they can simply mention that in their response. For example: “I asked ChatGPT this question, and here’s what it said. It seems accurate, but I’m not entirely sure: … .” I’ve personally found that responses like this are often still helpful.
Rather than banning AI-generated content, the policy could be revised to encourage responsible use. Users could be advised to carefully review any AI-generated content before posting, ensuring it aligns with their intent or disclosing if they are unsure about its accuracy.
Draft of Updated Policy:
- Use AI-generated content responsibly.
AI-generated content is allowed as long as it accurately reflects your intent and adds value to the conversation. If you are unsure about the accuracy of the AI-generated content, please disclose that fact. Repeated failure to follow this policy may result in content removal or account removal.
Would updating the policy in this way make sense, or are there other reasons for banning AI-generated content that I may not have considered?
Thanks for flagging this, @elliotwaite! I don’t see an issue with quoting ChatGPT or other AI tools, as long as users are disclosing the source of the content in the post, and I’m happy to clarify that in the guidelines – e.g.:
- Don’t post AI-generated content without acknowledgment that it is AI-generated.
Help us create a genuine, fun community by being yourself! If you feel that AI-generated content would add value to a discussion, be sure to disclose that the content is AI-generated (e.g. “I asked ChatGPT this question, and this is what it said: …”). Posting AI-generated content without proper acknowledgment will result in a warning and deletion of the content after the first offense, and may result in removal of your account after multiple offenses.
I think what the AI was used for is important. Using it to do a grammar pass over a post after you write it is different from asking ChatGPT to tell you how to design a type of library and copy-pasting its answer into a post.
This policy update seems like a good starting point. Personally, I’d prefer a more relaxed approach that only asks users to carefully review and take responsibility for any AI-generated content they post. I think requiring disclosure of all AI-generated content might just add unnecessary noise. That said, I understand opinions on this may vary.
Out of curiosity, what’s the reasoning behind requiring disclosure for every use of AI-generated content?
Notice: This post includes AI-generated content.
I think one would choose AI-generated content only to fix grammar, typos, or presentation, especially if they’re a non-native English speaker.
Instead of requiring disclosure every time, would it be possible to let readers flag posts as “possibly AI-generated”? Addressing such posts would be easier for the moderators, too.
My radar for AI-generated posts:
- reiterates obvious ideas
- shows signs of delusion
- shows no vulnerability or gaps in knowledge
I’d like to hear others’ ideas.
The main issue I’m hoping to avoid is technical troubleshooting threads that are full of AI-generated advice and code snippets which sound very believable, but aren’t correct or up-to-date. In my experience, this creates a really frustrating experience for users who are looking to get advice or troubleshooting help from the community.
I see, that issue does seem frustrating. Perhaps those specific scenarios could be included in the policy, along with @shashankp’s suggestion to allow users to flag irresponsible uses of AI-generated content. I asked an AI to suggest a policy based on the discussion so far, and here’s a version it came up with:
AI-generated content is allowed as long as it reflects your intent, adds value to the discussion, and has been reviewed for accuracy—especially for technical advice or code snippets. If you’re unsure about its accuracy, or if AI played a significant role in your response, please disclose that (e.g., “This was suggested by ChatGPT, and it seems correct, but I’m not entirely sure.”). Users can flag posts they believe are problematic or misleading due to AI content, and moderators will review these posts. Repeated violations of irresponsible use of AI-generated content may result in content removal or account suspension.
This seems like it could work well, but I’d be interested to hear your thoughts on it.
Context matters a lot – particularly for technical troubleshooting threads, I think that folks should be required to disclose. We can make the guidelines differentiate between different use cases if that helps:
Don’t post AI-generated technical solutions without disclosing the source.
If you feel that an AI-generated code snippet or technical solution would add value to a discussion, be sure to disclose that the content is AI-generated (e.g. “I asked ChatGPT this question, and this is what it said: …”).
Otherwise, you’re welcome to use AI to edit or supplement your posts, but posts that are obviously AI-generated, don’t include a disclosure, and don’t add value to a discussion will be removed. Repeatedly violating these guidelines may result in removal of your account.
I like it. It seems to address the main concerns raised so far.
I still think it could be made less strict by not requiring disclosure for technical solutions or code snippets that have been carefully reviewed and verified by the user, as such requirements might unintentionally add unnecessary noise to discussions. However, I’m open to the idea of starting with a more cautious policy.
I’d be okay with moving forward with your proposed policy, and it can always be refined further in the future as needed.