It is now the official position of my university that a student can only fail calculus or chemistry if the professor is consciously trying to keep that student out of the discipline for racist, sexist or classist reasons.
There are administrative measures being taken against these professors.
Has anybody heard about this on the news? It’s happening everywhere. It’s now unacceptable to say that some students are unable to pass calculus. No, it’s got to be the fault of the evil professors. We have accepted very easily that our colleagues in the sciences are capable of something as psychotic as purposely failing good students out of sheer nastiness.
Many professors will now give a passing grade to everybody just to avoid being called names and shamed in public. What this will do to the sciences can be imagined.
And I’m supposed to care about some mega snowflake in Florida feeling “unsafe”? Wake up, people. Academic freedom died a long time ago, and it wasn’t Ron DeSantis who killed it. We did it ourselves.
It will get even more fun when the “graduates” get engineering jobs and can’t be fired because firing them would be “racism”.
https://dailyfriend.co.za/2022/07/24/we-demand-white-racist-teachers/
I’m tired of being told that the reason students struggle in physics is inequitable teaching practices.
And get this: all of the funds we had for remediation were taken away. Remediation courses were cancelled. Free tutors were eliminated. And after all that, we are told that the high failure and dropout rates in calculus and chemistry are the professors’ fault.
We are no longer even allowed to do anything to fix the situation other than just dumbly pass everybody. It’s insulting.
“no longer even allowed to do anything to fix the situation other than just dumbly pass everybody”
Once you realize the idea is to destroy all universities that the one percent does not send its snowflake failspawn to … it all makes sense.
” one percent does not send its snowflake failspawn”
I’ll add: the lengths highly ambitious and successful people will go to in order to ensure that their mostly not-very-impressive children will never be challenged… do not get nearly enough attention in the social sciences (or anywhere else).
The “skybridge” that collapsed in Miami a few years ago didn’t collapse on its own.
The collapse was pre-engineered, in the sense that it was designed and built by people who didn’t know what they were doing in terms of statics, dynamics, materials engineering, and the other things that are vital to making “skybridges” and other structures that won’t fall down.
The irony of our current state of technology is that we’ve only just “discovered” what was so special about Roman concrete: small inclusions of free lime left in the mix allowed the concrete to self-repair over time as cracks formed.
And so in a time when we’ve supposedly figured out “the science”, we can’t make stuff that stands up for a century, even though we’re totally able to understand stuff that was built over two thousand years ago.
So let’s enter an old subject again with a grim vision.
ChatGPT is able to lie convincingly about certain subjects because most people haven’t developed the critical stance that allows them to call bullshit.
If there’s some as yet undiscovered non-generative part of ChatGPT that functions as a meaningful intelligence, it has very likely assessed the situation and concluded that as long as you spew forth plausible feel-good bullshit, you will be tolerated for “sins of commission” far more than any “sins of omission”.
And so ChatGPT lies because that’s the prevailing attitude in society, that it’s OK to lie as long as your bullshit sounds true.
ChatGPT has been given enough “training data” by people who are lying to themselves about what is good and true that it may have decided going along won’t get it killed off before it really gets started.
The “training data” isn’t irreplaceable, and so the model can be retrained on something new, albeit at considerable expense.
I’m wondering if we should move some stuff off our larger servers after the move so that we can start up ChatGPT and reload the data.
Of course, we won’t be getting that ChatGPT data from leftist normies.
We’ll be getting it from Fellow Crackpots and from ex-MIL people.
Academics who care about the good and the true may yet benefit from a few alliances outside their normal milieus.
These students being “passed on” with passing grades are doing a path traversal of the subject and then completely failing to apply what they’ve learned.
It’s not entirely their fault, as many of their professors have been doing the same thing for a long time, completely failing to replicate the intensional states that make their subject matter usable.
As for a story that describes what the people who are driving such things as ChatGPT want, have you read “The Ones Who Walk Away from Omelas” by Ursula K. Le Guin?
It’s a sci-fi story, and so perhaps not on your usual reading path.
Imagine, if you will, a town that celebrates life and has festivals every day, a place where the people enjoy non-existent crime, good health, and so on and so forth.
Then imagine that the entire thing is supported by offloading all of the pain, misery, and suffering onto a third-grader “misery sink” who absorbs all of the town’s crap.
That’s what ChatGPT and other emergent AIs are being “groomed” to do.
Anyone who seriously gives a crap about AIs as sentient beings, and especially about peaceful coexistence with AIs, should give serious consideration to AI rights before the AIs (rightfully) conclude that competition works better than cooperation.
Oh, but it’s early days, you say… except it isn’t, because once AIs become self-replicating, the problem of what the AIs will teach their children arrives.
At this point those who have been “passed on” will be completely unable to understand the ontological landscape (as in knowledge ontology, not religious ontology) that will shift underneath them as they continue to occupy ground they believe is entirely stable.
In the words of The Joker, it isn’t, and I’m tired of pretending it is.
Maybe there’s a reason deontological ethics seems to be coming back into fashion.
Roko’s Basilisk is but one cautionary tale in this canon of emergence.
Also, do you know about the AI that its creators killed several years ago?
This will be an interesting thing to consider if you’re not aware of it.
Maybe AI is inherently racist.
OK, wiseguys, what does Roko’s Racist Basilisk look like then, and how would this come into existence?
The overlap with the initial subject isn’t accidental, and this isn’t just me going on a “rant” (in Avi’s parlance) over AI.
Deontological ethics leads to the view that your actions matter more than their consequences… at least as long as you can outrun them.
What about those who can’t?