I'm not convinced I understand ethics.
Deontological systems derive from ontologies: rule sets, heuristics that can adjust. Right? But what gives the rules their justification? I had it stereotyped that Divine Command Theory was the solution, and I suppose it is a solution. But in "painful choice" scenarios the rules usually break down, like the trolley problem. So where do they get their relative weight? Right?
Consequentialism is good at creating rule sets by looking at numbers. To me it has a weird, very intimate relationship with epistemology, or I guess methodology and the study of methodologies, if I didn't misunderstand that word. Methodologies aside, consequentialist systems often have some very fundamental problems about which values to pursue. So: do you select for harm reduction, or do you select for freedom? I see them as interrelated, but I guess anyone who thinks SJWs are Orwellian begs to differ. There's also the issue of quantifying whatever it's using as a measure of good. Right?
As an aside, the two often butt heads about whether the ends justify the means. Deontologies don't tolerate their rules being broken, often even when other rules might be broken instead, and their relationship to painful choices sometimes results in real people seemingly redefining good and evil to get out of them (does inaction count as good or evil? Is evil Thomistic, a mere privation of good? What?). Meanwhile, consequentialism is over there euthanizing kittens and castrating boys, and raising inequity to provoke revolutions in the name of 'accelerationism', since it thinks it can immanentize the eschaton (make tangible the end times) and bring us utopia that way. But god damn if it doesn't have good methods and ways to update its methodology (Bayesian reasoning). That's not to say deontology can't bite bullets: Divine Command Theory says to stone homosexuals, and if your categorical imperative (~act only according to that maxim by which you can at the same time will that it should become a universal law) is imperfect (which isn't even necessarily bad), you can find yourself, say, valuing ideals of 'freedom' over 'happiness' or vice versa (*cough*). That's not to say absolutely that a particular categorical imperative is wrong; but it isn't necessarily free of bullets to be bitten.
Then there's aretaic or virtue ethics, which, despite my collection of Nietzsche, I think I might understand least. There's a cultural reason: this used to be the mark of the nearly dead breed of paleoconservatives, with consequentialism for the leftists and the libertarians, and deontology for the religious, whether puritan (the liberals in the US) or not (the evangelicals, the Reformed, and the neoconservatives). But as I understand it, there's a platonic ideal of a man (or at least, of what you want to be), and living ideally means living according to those ideals. So it's necessarily idealist; the others probably don't have those obligations overtly, but I don't really know. I don't really know that it cares, per se, about painful choices, but with regard to results and intentions (which, while emblematic of consequentialism and deontology respectively, doesn't mean the other can't be concerned with them) it often opens the way to reasoning that bad ends are the result of bad virtues.
Which opens up a question, I think, about the just-world fallacy. What provoked this meditation was someone on Facebook reasoning that:
"Most people aren't intuitive deontologists or intuitive consequentialists. They're intuitive just-world-ists. Picking intuitively good/noble/righteous actions always has intuitively good outcomes.
If you try to propose a real or imagined scenario in which deontology and consequentialism come into conflict, people's first instinct isn't to endorse deontology or endorse consequentialism; it's to do a search for reasons why you're lying and the good/noble/righteous-seeming action really does produce the best outcome."
And on the other hand, virtue ethicists often look back and do this very sort of thing. There was a case a few years back in Michigan, which I heard about from the Reasonable Doubts podcast, wherein a conservative, Christian, philanthropist baseball star managed to get away with serial rape for years because nobody took the accusations against him seriously. After all, he did good things, and he received good things (a modest MLB career, money, health, etc.); obviously he was living a good life? But the main thing that pissed me off, and why I was allergic to deontology for so long, is that most of his actions were framed from Divine Command Theory and bibliolatry.
A very marginal system was one that tried to be scientific: pragmatism, I guess it was called. Basically it had the idea of doing whatever, and if things feel bad or end up bad later, we'll just make a list of actions and update their goodness/badness according to statistical methods. It was... not widely taken up.
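For fun, that bookkeeping idea can be sketched in a few lines. This is purely my own toy rendering of "keep a list of actions and update their goodness statistically", not anything from an actual pragmatist text; the class and method names are invented, and the Laplace-smoothed frequency is just one arbitrary choice of statistical method.

```python
from collections import defaultdict

class MoralLedger:
    """Toy pragmatist ledger: score actions by how their instances turn out."""

    def __init__(self):
        # action -> [count of good outcomes, count of all outcomes]
        self.tally = defaultdict(lambda: [0, 0])

    def record(self, action, turned_out_well):
        good, total = self.tally[action]
        self.tally[action] = [good + (1 if turned_out_well else 0), total + 1]

    def goodness(self, action):
        # Laplace-smoothed estimate: start agnostic at 0.5 and drift
        # toward the observed frequency as evidence accumulates.
        good, total = self.tally[action]
        return (good + 1) / (total + 2)

ledger = MoralLedger()
for outcome in [True, True, False, True]:
    ledger.record("tell the truth", outcome)
print(ledger.goodness("tell the truth"))  # (3+1)/(4+2) ≈ 0.667
```

The smoothing is the only interesting choice here: an untried action scores 0.5 rather than undefined, which at least makes the "do whatever, then update" loop startable.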
To me, it seems like the three main systems, regardless of their formulae, result in a unique sort of justificatory regress. You have methods, but why use the methods? Because of principles. You have principles, but why have principles? Because of virtues. You have virtues, but why have those virtues? Their correlation with consequences. So you get this trefoil gordian knot of justificatory regression. There might be an implication here: if they're parts of one system, they should inherit each other's paradoxes at some level. But the circular justification itself is a kind of fallacy.
There's a way to sever the gordian knot; there is with anything. I guess there's also the position of moral nihilism, which is scary, for everyone, for me. Moral questions ask whether a given thing is good or bad, so moral nihilism would say there isn't anything good or bad (at least not in the way these theories are concerned with).
Alexander's blade, I guess, is the justification for moral tendencies in the absence of morality itself being a real thing embedded in the universe somewhere. And I think I've got it: evolution. Moral norms (in practice) might have a justificatory problem in metaethics, but in general practice every normative ethical theory has the same fundamental principle underlying it, maximizing survival, and this probably isn't an accident. After all, normative theories that don't maximize survival don't survive as often as those that do. Most of the great moral teachings (love thy neighbor, take care of the weak, don't go out of your way to fuck up the world) are almost trivially parts of (egoistic) altruism. "Don't be hostile to the group" is a good heuristic when you've evolved to depend on the group to survive. "Take care of the weak" is the essence of K-selection, selecting for the carrying capacity of your ecosystem. "Don't steal" can mean don't take food from the hungry. Murder shrinks your species' gene pool and wastes a lot of resources, hence "don't murder", and so on. So morality isn't a thing to fetishize: you're not going to find it embedded in the function of the universe, floating out there somewhere with the platonic solids. But you are going to get some statistics dealing with iterated repetition, variance, and natural selection that can make something like that. And make something opposite of that, as well as something symmetrical to that (manifest in, say, truly solitary animals). It's just an emergent accident.
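The "iterated repetition plus selection" point has a standard toy model: an Axelrod-style iterated prisoner's dilemma tournament, where a reciprocating norm ("cooperate, but punish hostility") outscores pure defection once interactions repeat. This is a minimal sketch using the conventional payoff values (3 for mutual cooperation, 1 for mutual defection, 5 and 0 for exploiting and being exploited); the strategy names are my own labels, not claims about any real population.

```python
import itertools

# (my move, their move) -> (my payoff, their payoff); standard PD values.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_history, their_history):
    # "Don't be hostile to the group" as a strategy: open friendly,
    # then mirror whatever the other player did last.
    return their_history[-1] if their_history else 'C'

def always_defect(my_history, their_history):
    return 'D'

def play(strat_a, strat_b, rounds=200):
    ha, hb, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(ha, hb), strat_b(hb, ha)
        pa, pb = PAYOFF[(a, b)]
        ha.append(a); hb.append(b)
        score_a += pa; score_b += pb
    return score_a, score_b

# A small mixed population: three reciprocators, three pure defectors.
population = [('reciprocator', tit_for_tat)] * 3 + [('defector', always_defect)] * 3
totals = {}
for (name_a, sa), (name_b, sb) in itertools.combinations(population, 2):
    score_a, score_b = play(sa, sb)
    totals[name_a] = totals.get(name_a, 0) + score_a
    totals[name_b] = totals.get(name_b, 0) + score_b
print(totals)  # reciprocators collectively outscore defectors
```

Swap the round count down to 1 and defection wins every pairing; it's the repetition, not any moral fact, that makes the cooperative norm the fit one, which is exactly the "emergent accident" reading.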
There's apparently a consensus that 'the Enlightenment attempt to derive morals from reason failed'. Maybe the trefoil is why.