Is Twitter/X viable for that? They can decide, and have, to randomly put information behind login walls.
If you’re talking ethics, I think the most important thing is that the user controls what their software does. YouTube videos are hosted on the web, and fundamentally people can choose how to display web sites on their own computer. Of course, if YouTube doesn’t like this it’s their prerogative to not host their content like that.
It’s not an ad-blocker, it’s a wide-spectrum content blocker which is necessary for security.
If it were a new platform and somebody wanted to try that I’d at least watch what happens, but Musk has burned through too much credibility.
This isn’t a correct answer to your question, that’s why it’s getting downvotes.
My biggest regret was getting rid of a perfectly good portable CRT TV that would have been ideal for pre-7th generation gaming, just as they stopped making good quality CRTs.
I’m about to get rid of my ageing “dumb” TV and not replace it. Everything comes into my laptop now, so any monitor and set of speakers to plug it into will do.
My prediction is that this is going to be the end of the line for TVs as stand-alone hardware - just like most people don’t really have stand-alone Hi-Fi systems any more.
OK well I’m not sure where the AppImage “purists” and Flatpak “critics” are but I’ve not really encountered them.
I mean they are two things that co-exist, it’s not like they’re in commercial competition. Flatpak itself is usually distributed as an RPM or deb.
What’s off? That looks like it might be useful.
Yep, the strict marine reserve. But it doesn’t stop the military base from pumping sewage into it, and it doesn’t stop rich people with yachts from going there. Just normal people and Chagos islanders aren’t allowed. Also a difficult thing to note is that this was during a Labour government (which many liberal-minded British people consider a lesser of two evils). The only major politician who intended to do right there was Jeremy Corbyn, but he was slaughtered by the media for being not evil enough.
Yes, I was kind of being rhetorical there, I thought that would be enough to draw attention to what’s going on. Also a new Lemmy account that exclusively links to one unknown website is a big red flag.
Well he’s on Mastodon so I guess that’s your answer.
Why would we attack the author? That seems like an oddly specific request that makes me oddly suspicious of the author, if anything.
I don’t fully agree with OP but I think we could probably do with adjusting some of them. Personally I think with current AI, if somebody composes something by making multiple AI prompts and selects the best result, they should get some kind of authorship because they used a tool to create something.
Detecting whether a student used ChatGPT to write an assignment can be challenging, but there are some signs and strategies you can consider:
Unusual Language or Style: ChatGPT may produce content that is unusually advanced or complex for a student’s typical writing style or ability. Look for inconsistencies in language usage, vocabulary, and sentence structure.
Inconsistent Knowledge: ChatGPT’s knowledge is based on information up to its last training cut-off in September 2021. If the assignment contains information or references to events or developments that occurred after that date, it might indicate that they used an AI model.
Generic Information: If the content of the assignment seems to consist of general or widely available information without specific personal insights or original thought, it could be a sign that ChatGPT was used.
Inappropriate Sources: Check the sources cited in the assignment. If they cite sources that are unusual or not relevant to the topic, it may indicate that they generated the content using an AI model.
Plagiarism Detection Tools: Use plagiarism detection software, such as Turnitin or Copyscape, to check for similarities between the assignment and online sources. While these tools may not specifically detect AI-generated content, they can identify similarities between the assignment and publicly available text.
Interview or Discussion: Consider discussing the assignment topic with the student during a one-on-one interview or discussion. If they struggle to explain or elaborate on the content, it may indicate they didn’t personally generate it.
It’s important to approach these situations with caution and avoid making accusations without concrete evidence. If you suspect that a student used an AI model to complete an assignment, consider discussing your concerns with the student and offering them the opportunity to explain or rewrite the assignment in their own words.
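On the plagiarism-tools point above: a crude text-similarity check can be sketched with nothing but the Python standard library. This is a toy illustration, not a substitute for Turnitin or Copyscape, and the example texts are made up for demonstration:

```python
import difflib

def similarity_ratio(text_a: str, text_b: str) -> float:
    """Return a rough 0-1 similarity score between two texts,
    using difflib's longest-matching-block heuristic."""
    return difflib.SequenceMatcher(None, text_a.lower(), text_b.lower()).ratio()

# Hypothetical assignment excerpt vs. a suspected source passage.
assignment = "The mitochondria is the powerhouse of the cell."
source = "Mitochondria are often called the powerhouse of the cell."

score = similarity_ratio(assignment, source)
print(f"similarity: {score:.2f}")
```

A high score only flags overlap with a *known* text; it says nothing about whether a passage was AI-generated, which is exactly why the comment recommends combining tools with a follow-up discussion.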
Absolutely, I post much more here because I know actual people will actually read it and may actually respond like they would to an actual human. It’s like the old days of the internet.
SLA? If that means something like “service level agreement” (I don’t know, you didn’t specify, I’m guessing), then I can still find examples where it falls well below what I would expect from a public service, so if there were such an agreement in place, I would definitely be opposed to it as a taxpayer.
I mean yes obviously, there are much more viable platforms like Mastodon, or even a self-hosted website.