U.S. District Judge Henry Wingate’s July 20 order granted a request for a temporary restraining order (TRO) from several education advocacy groups, including the Mississippi Association of Educators, barring the state government from enforcing several provisions in a bill aimed at eradicating diversity, equity, and inclusion (DEI) programs.
But the order, which has since been corrected, apparently contained multiple errors, which the defendants noted in an unopposed motion to clarify on Monday.
‘The TRO Order identifies incorrect plaintiffs and defendants; recites allegations that do not appear in the operative complaint and/or are not supported by record evidence; identifies, as quoted excerpts, certain terms that do not appear in the language of [the state bill]; and relies upon the purported declaration testimony of four individuals whose declarations do not appear in the record for this case,’ the motion to clarify reads.
Go here to read the rest of the story. Wingate has been a Federal district court judge for four decades and is 78. My guess is that his clerk does most of the actual judging. The “corrected” opinion refers to a non-existent case. This is an example of why lifetime appointments to the Federal bench are a very bad idea, and why the misuse of artificial intelligence is going to be a major problem going forward.
“Artificial Intelligence” is a marketing phrase.
These are really just large language models. “AI” gives the illusion of intelligence. No more.
It’s like saying a 10-ton jack is a strong man.
Oh dear. What a lazy sod. The problem is trying to prove his office used some AI program to make up the case. Once again, technology opens Pandora’s box to a plethora of problems before anyone thinks about the consequences.
AI itself, whether Meta’s Llama, ChatGPT, Grok, etc., admits to the following AI risk factors. Note the one on over-reliance and loss of human skills. Note also that AI hallucination (i.e., when an AI model generates information that is inaccurate, false, nonsensical, or misleading, even though it may appear plausible and coherent) is not mentioned. Now you know why the AI technology control plan that I wrote is 65 pages long, and why a series of implementing procedures is sure to follow. My fear, however, is that what I write will become mere words in the wind, because it’s easier to press a button and get a response than to independently verify each response by alternate means (which the plan that I wrote requires for anything affecting safety, quality, or regulatory compliance).
1. Algorithmic bias
2. Privacy violations and data security threats
3. Misinformation and manipulation
4. Job displacement
5. Lack of transparency and explainability
6. Autonomous weapons systems
7. Socioeconomic inequality
8. Over-reliance and loss of human skills
9. Environmental impacts
10. Accountability and liability
Sounds like the case was predetermined; the evidence and the law were not considered.
“…appointments to the Federal bench are a very bad idea”
Probably the best way to deal with this is to do something similar to what the Vatican does: make them submit their resignations when they hit a certain age, although I’d suggest 65, not 75.
Some of us can maintain mental acuity past 65, but not all of us. Removal isn’t an option because impeachment isn’t a viable alternative. And these decisions affect people’s lives, with no reasonable way to correct a wrong one.
Mea sententia peculiaris sicut civis liber.
My personal opinion as a free citizen.
I will continue with my previous comment. A formal risk analysis is required when using AI for any task involving safety, quality, or regulatory compliance (especially when used in the Legal profession). This risk analysis must compare the likelihood (probability) of an adverse event with the consequence (or severity) of such an event. High-likelihood, high-consequence events are high-risk events that require mitigation measures and independent verification of AI results by an alternate method. Low-likelihood, low-consequence events are low-risk events that can be tolerated. In nuclear energy (as in aerospace, petrochemical, etc.), this is a very mathematical and dispassionate process. Do the freaking process!
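For what it’s worth, here is a minimal sketch in Python of that likelihood-versus-consequence screening. The category names, the matrix entries, and the example calls are illustrative assumptions for this comment, not excerpts from the actual control plan.

```python
# Illustrative likelihood x consequence screening, per the process described above.
# The labels and the matrix entries are assumptions for illustration only.

LIKELIHOOD = ["Low", "Medium", "High", "Very High"]   # probability of an adverse event
CONSEQUENCE = ["Low", "Medium", "High"]               # severity if the event occurs

# Risk lookup: keys are (consequence, likelihood) pairs.
RISK_MATRIX = {
    ("Low",    "Low"): "Low",      ("Low",    "Medium"): "Low",
    ("Low",    "High"): "Medium",  ("Low",    "Very High"): "Medium",
    ("Medium", "Low"): "Low",      ("Medium", "Medium"): "Medium",
    ("Medium", "High"): "High",    ("Medium", "Very High"): "High",
    ("High",   "Low"): "Medium",   ("High",   "Medium"): "High",
    ("High",   "High"): "High",    ("High",   "Very High"): "High",
}

def classify_risk(consequence: str, likelihood: str) -> str:
    """Return the risk level for a (consequence, likelihood) pair."""
    return RISK_MATRIX[(consequence, likelihood)]

# High-likelihood, high-consequence use (e.g., unverified AI output feeding a
# regulatory submittal) screens out as high risk; trivial use screens out low.
print(classify_risk("High", "Very High"))   # -> "High": mitigate and independently verify
print(classify_risk("Low", "Low"))          # -> "Low": tolerable
```

High-risk results are the ones that, as described above, would require mitigation and independent verification of the AI output by an alternate method.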
So I came up with a way of measuring severity level using Isaac Asimov’s Three Laws of Robotics (the actual text is somewhat different from the text below):
High Severity – First Law: AI may not harm a person or through inaction allow harm to result.
Medium Severity – Second Law: AI must obey a person except when to do so would violate the First Law.
Low Severity – Third Law: AI must protect its existence (to protect the company’s financial investment) except when to do so would violate either the Second or the First Law.
Then I came up with criteria for the likelihood of an adverse event in a 2-year period (2 years being the normal reactor refueling outage schedule):
Very High: > 60% probability
High: 40–60% probability
Medium: 10–40% probability
Low: < 10% probability
Then I listed all the AI risk factors (see my previous comment) in the left-hand column of a table where the AI development engineer can check off severity level and likelihood (in the top row), and from both of those, quantify risk.
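For the engineers reading along, a rough sketch of that checklist in Python follows. The severity and likelihood check-offs, the scoring roll-up, and the thresholds are all placeholder assumptions for illustration, not values from the actual plan.

```python
# Illustrative checklist: AI risk factors (left-hand column) with the engineer's
# severity and likelihood check-offs, rolled up into a risk level.
# Every assignment and threshold below is a placeholder, not a value from the plan.

SEVERITY_RANK = {"Low": 1, "Medium": 2, "High": 3}
LIKELIHOOD_RANK = {"Low": 1, "Medium": 2, "High": 3, "Very High": 4}

def risk_level(severity: str, likelihood: str) -> str:
    """Simple roll-up: score = severity rank x likelihood rank (thresholds illustrative)."""
    score = SEVERITY_RANK[severity] * LIKELIHOOD_RANK[likelihood]
    if score >= 8:
        return "High"
    if score >= 4:
        return "Medium"
    return "Low"

# Risk factor -> (severity checked off, likelihood checked off)
checklist = {
    "Algorithmic bias":                        ("Medium", "High"),
    "Misinformation and manipulation":         ("High",   "High"),
    "Over-reliance and loss of human skills":  ("High",   "Very High"),
    "Accountability and liability":            ("Medium", "Medium"),
}

for factor, (sev, lik) in checklist.items():
    print(f"{factor:40s} severity={sev:6s} likelihood={lik:9s} risk={risk_level(sev, lik)}")
```

The point is only to show the mechanics: factor by factor, check off severity and likelihood, then read off the risk and decide what needs mitigation and independent verification.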
You folks who are engineers are very familiar with this process. It’s been around for at least a half century if not longer in industries requiring formal risk analysis. Now there are lots of details that I am not sharing, and won’t share even if you ask because you don’t need to know such details. The bottom line is this:
If you want to use AI for any official business, then you must do a formal risk analysis that includes looking at risk factors like transparency, bias, data privacy, job displacement, over-reliance on technology, etc.
When I explained this to our Operational Technology (digital I&C) guys, they were all in favor of what I had done because we all know what it is like to sit in front of the Green Table with the Bureau of Naval Reactors or the US NRC and explain our screwup. But Legal didn’t like it (because it’s too cumbersome and rigorous), and Donald’s post is an indicator of the attitude of today’s lawyers.
AI can be a wonderful and helpful tool. I have been using it more and more. But there must be formal guardrails in place, and you can’t let yourself get lazy. You must always second-check what AI tells you, because sometimes it makes truly dumb a$$ mistakes (like when translating Latin: 98% of the time it gets the translation right, but the other 2% is a really dumb doozy!).
It is an old problem but with a digital accelerant.
The problem is misplaced trust.
Assuming our host is correct, the judge trusted his clerk to get it right and the clerk trusted AI to get it right. Trust, not verified.
AI, even if it doesn’t go into Terminator mode, will be given unverified trust increasingly often, to our net detriment. Coupled with the capacity for nearly undetectable deception, AI will be more of a bane than a boon.
Perceptions of AI range from the benign (Star Trek) to the malevolent (HAL 2001) to the downright frightful (Maximum Overdrive, “We made you!”). At base, the somewhat verbose comments of LQC rest on the premise that what comes out of AI depends on what we put into it and how we choose to use it. Be assured that humanity will follow the instincts that have been present since its beginning and use AI for benefit or harm, as it has used every invention. The difference is that this one will have more far-reaching effects, and the consequences, whether harmful or beneficial, could be far greater, depending on the desired outcome.
“Perceptions of AI range from the benign (Star Trek) to the malevolent (HAL 2001) to the downright frightful …”
My regards to Captain Dunsel 😉
Donald Link is correct: I am verbose. Guidelines on, and risk analysis for, AI cannot be reduced to a sound bite and a meme. Sadly, too many people have lost critical thinking skills and suffer from attention-span deficit. Anything more complicated than 1+1=2 is beyond their capability. Case in point: a lawyer asked me where a certain regulatory requirement was implemented in regard to AI. I responded, “In the section that you just read – right here.” These people cannot comprehend what they read in nuclear plans and procedures, yet they work in nuclear energy, and they’re supposed to be smart because they got college degrees! God help us all! The problem isn’t AI. The problem is that we have been educated into imbecility by godless liberal progressivism. I thank God that my training and education were in the Engineroom Forward bilge of a 688-class nuclear fast attack submarine under the North Atlantic and not in the great halls of liberal progressive Academia.
Again, my personal opinion.