Alleviating the grind of being poor

You're making flat statements about what counts as consciousness/sentience. These are open questions. You might not be wrong, but I think you should be a bit more humble about your certainty.

Where do you think humans' thinking capabilities come from? Scientists are going to use a materialist framework to answer that: thinking comes from physical processes that go on in the brain. If that's true, then it is conceivable that thinking can be manufactured in a computer. Are you on board with that analysis, or do you believe that thinking originates from something else?

On a related but not exactly congruent note, do you believe humans have free will, or that we are just as bound by the laws of nature and determinism as other things?
Brad, in reading your post I recalled a PBS series about the brain on the Charlie Rose Show some 14 years ago.

The panel of experts constantly reminded the viewers about how little we know about how our brain functions and how much is yet to be discovered. The same can be said about AI.

I have no doubt, Brad, that your statement that "thinking can be manufactured in a computer" is a sure thing. On top of this, providing the information required for AI to outthink any human who ever existed, or could exist, is a given.
 
It was well played, yes, but all he did was feed the AI some bullshit and the AI spit out some bullshit in response.

Lawyers and counselors and therapists should be worried.

I added a couple of subtle Easter eggs in there . . .

Everybody should be.

Anybody who's worked in a high-paced, production-type environment understands the concept of exception reporting/management. Brad adding a couple of thoughts will be the norm for the human component of a great many processes in the coming years.

Exception managers. That's all most of us will ever hope to be.
 
You're making flat statements about what counts as consciousness/sentience. These are open questions. You might not be wrong, but I think you should be a bit more humble about your certainty.

Where do you think humans' thinking capabilities come from? Scientists are going to use a materialist framework to answer that: thinking comes from physical processes that go on in the brain. If that's true, then it is conceivable that thinking can be manufactured in a computer. Are you on board with that analysis, or do you believe that thinking originates from something else?

On a related but not exactly congruent note, do you believe humans have free will, or that we are just as bound by the laws of nature and determinism as other things?

There is a great question here; there is some evidence that humans make their choices long before they know they did.


The article below discusses it in terms of free will. The issue becomes: does the simple act of choosing a hand adequately reflect free will, or is choosing a house to buy far more complicated, such that it doesn't follow the same pattern?


I have seen other studies that suggest we make our choices and then the brain looks for rationalizations to explain the choice, not the other way around. Your brain chooses to drink water over soda, then looks for a reason why it made that choice. If it looks for reasons after the decision, what makes us think there is a valid reason? This concept has me concerned about our political discussions. If we are all making decisions and then simply validating them in our minds, what value are argumentation and debate?

Add in: what does it mean for free will if retrocausality extends from quantum mechanics into the standard model? If I am rescued from a shipwreck at 5 PM, then I have to board the boat at noon.
 
This concept has me concerned about our political discussions. If we are all making decisions and then simply validating them in our minds, what value are argumentation and debate?
Not directly on point, but a really well-reasoned and argued position on how people vote and why democracy doesn't actually provide what we think we want it to:

 
Anybody who's worked in a high-paced, production-type environment understands the concept of exception reporting/management. Brad adding a couple of thoughts will be the norm for the human component of a great many processes in the coming years.

Exception managers. That's all most of us will ever hope to be.
That, and figuring out the quickest, best queries or prompts to get what you want.
 
Thanks, I have ordered it on my Kindle.
The other one you might want to check out is one I've recommended a lot on here:


Haidt covers a lot of the social psychology research about decision making, rationalizations, etc. He uses the metaphor of an elephant and the rider, with the elephant being your unconscious emotions, desires, etc. driving your decisions and the rider acting as the after-the-fact lawyer coming up with reasons why the elephant chose a particular direction and, occasionally but with great effort, redirecting it.

That book more than any other changed how I approach talking to people about politics and, in many ways, life.
 
You dramatically underestimate what AI is now, or will very soon be, capable of.


Neither of your links addresses my point. Both are still variations on the theme of extrapolating from what exists.

This is why I brought up E=mc² and Einstein's thoughts about imagination.

Ask AI what the universe is and you will always get variations on what humans think.
 
Neither of your links addresses my point. Both are still variations on the theme of extrapolating from what exists.

This is why I brought up E=mc² and Einstein's thoughts about imagination.

Ask AI what the universe is and you will always get variations on what humans think.
I love how, after the Ukraine accounting debacle, you just doubled down again on being an expert on everything. It's like you're immune to humility.
 
There is a great question here; there is some evidence that humans make their choices long before they know they did.


The article below discusses it in terms of free will. The issue becomes: does the simple act of choosing a hand adequately reflect free will, or is choosing a house to buy far more complicated, such that it doesn't follow the same pattern?


I have seen other studies that suggest we make our choices and then the brain looks for rationalizations to explain the choice, not the other way around. Your brain chooses to drink water over soda, then looks for a reason why it made that choice. If it looks for reasons after the decision, what makes us think there is a valid reason? This concept has me concerned about our political discussions. If we are all making decisions and then simply validating them in our minds, what value are argumentation and debate?

Add in: what does it mean for free will if retrocausality extends from quantum mechanics into the standard model? If I am rescued from a shipwreck at 5 PM, then I have to board the boat at noon.

MtM, toss in addictions when considering how our brain finds ways to confirm emotionally driven actions, which can include bodily pleasures.

Pleasures which actually may be harmful to our mental and physical health.

Thus addictions such as drugs, alcohol, and sex can take charge of our pleasure centers to the point where destructive behavior is rationalized by our thinking.

In other words, emotions and pleasures cancel out the decision-making process we call free will. To this, add in the possibility that we are born prone to some addictions.

I write these comments hoping I am completely wrong, while also feeling lucky.
 
I love how, after the Ukraine accounting debacle, you just doubled down again on being an expert on everything. It's like you're immune to humility.
Well, you gotta do better than condescending personal attacks for openers.

You never explained how a single-use weapon depreciates. Corollary: government spending in Ukraine starts from a position of spin, concealment, and deception.

You never explained what’s wrong with the premise that the mass/energy in the universe is fixed.

You haven’t explained how AI can deal with total abstractions
 
You haven’t explained how AI can deal with total abstractions

What do you do that is a total abstraction?

Look at a great invention, the plane. Humans dreamed of a flying machine for a long time. Birds do it; we didn't make it up from whole cloth. Thousands of trial-and-error attempts later, someone got it. I don't see anything there an AI could not have done.
 
What do you do that is a total abstraction?

Look at a great invention, the plane. Humans dreamed of a flying machine for a long time. Birds do it; we didn't make it up from whole cloth. Thousands of trial-and-error attempts later, someone got it. I don't see anything there an AI could not have done.
It wouldn't have killed a bunch of humans during testing. So, there's that.
 
This, of course, is not a courtroom.
"What you are about to witness is real. The participants are not actors. They are the actual people who have already either filed suit or been served a summons to appear in a California Municipal Court. Both parties in the suit have agreed to dismiss their court cases, and have their disputes settled here...in our forum...The Water Cooler."
 
You never explained how a single use weapon depreciates.

They become functionally obsolete. We make a weapon designed to defeat a Russian tank. Russia puts new armor on the tank, so new versions of the weapon have a bigger/better charge.

Prime example: our WW2 bazookas were worthless against T-34s in Korea. We had newer ones coming out. The WW2 bazookas were no longer worth what we paid for them because they could not do the job we wanted them to do.

The argument seems to be that this is how we have accounted for weapons, well, forever.
 
They become functionally obsolete. We make a weapon designed to defeat a Russian tank. Russia puts new armor on the tank, so new versions of the weapon have a bigger/better charge.

Prime example: our WW2 bazookas were worthless against T-34s in Korea. We had newer ones coming out. The WW2 bazookas were no longer worth what we paid for them because they could not do the job we wanted them to do.

The argument seems to be that this is how we have accounted for weapons, well, forever.
I understand functional obsolescence, but I don’t think that explains the so-called mistake or the explanation for correcting it. Besides, the Javelins we gave them were pretty effective.
 
They become functionally obsolete. We make a weapon designed to defeat a Russian tank. Russia puts new armor on the tank, so new versions of the weapon have a bigger/better charge.

Prime example: our WW2 bazookas were worthless against T-34s in Korea. We had newer ones coming out. The WW2 bazookas were no longer worth what we paid for them because they could not do the job we wanted them to do.

The argument seems to be that this is how we have accounted for weapons, well, forever.
The M72 LAW was developed in 1959, was in active use by 1963, and is still in use today. It is a light anti-tank weapon. They were effective in Vietnam for assaults on bunkers.

Some weapons may be functionally obsolete but can be repurposed for other uses. Just ask the Marines about the Beretta M9. They make great paperweights.

Just for an example.
 
I understand functional obsolescence, but I don’t think that explains the so-called mistake or the explanation for correcting it. Besides, the Javelins we gave them were pretty effective.

Do you have any evidence that this is not how we have handled weapons in the past? Even Stoll seemed convinced this is the way, and he is an accountant. What do you know that he doesn't? It seems you are complaining that this administration is doing what every administration has done in every military aid package in US history.

Second, I believe we have Javelins A through F. I think the G model is starting to be manufactured. I don't know how many of each we gave Ukraine. But the purpose of a Javelin is to destroy a Russian tank. I would say the ones we have given them have gotten us our money's worth over any that are sitting in a warehouse.
 
What do you do that is a total abstraction?

Look at a great invention, the plane. Humans dreamed of a flying machine for a long time. Birds do it; we didn't make it up from whole cloth. Thousands of trial-and-error attempts later, someone got it. I don't see anything there an AI could not have done.
I understand my stoker, who is abstract-random. AI can never follow our conversations.

Seriously, do you deny that everything AI does requires a rule? A rule given by a human? Can AI tell me why there is something instead of nothing?
 
Can AI tell me why there is something instead of nothing?

I gather you are discussing poetry or religion, as science can't answer that sort of why. We can't scientifically explain why the universe exists; it isn't something science can even attempt. We have a difficult enough time with how. I don't know that this is a problem.

AI is in its infancy. Its abilities will grow, especially as quantum computing comes online. Right now we hamstring AI; the corporations don't want it to say things that may scare off investors or advertisers. So if asked "What is the meaning of life?", it is programmed to give the "some say, others say" answer. That safety rail won't always be there. There is no reason a computer can't compare various origin stories and come up with one that it believes is at least minimally more likely. I don't see a reason it can't take parts of stories and integrate them into a new one.

Once it gets quantum abilities, and the guardrails come down, I think we'll see huge moves toward AGI. AI can already write stories and poems, paint, and compose music. It is only going to get better.
 
I gather you are discussing poetry or religion, as science can't answer that sort of why. We can't scientifically explain why the universe exists; it isn't something science can even attempt. We have a difficult enough time with how. I don't know that this is a problem.

AI is in its infancy. Its abilities will grow, especially as quantum computing comes online. Right now we hamstring AI; the corporations don't want it to say things that may scare off investors or advertisers. So if asked "What is the meaning of life?", it is programmed to give the "some say, others say" answer. That safety rail won't always be there. There is no reason a computer can't compare various origin stories and come up with one that it believes is at least minimally more likely. I don't see a reason it can't take parts of stories and integrate them into a new one.

Once it gets quantum abilities, and the guardrails come down, I think we'll see huge moves toward AGI. AI can already write stories and poems, paint, and compose music. It is only going to get better.
Eh. Everything AI will ever do is crunch data. I agree it will crunch more data, and crunch it faster, as tech improves. But don’t we reach a point of diminishing returns? Under the jelly bean theory, the 100,000,000th guess doesn’t add much.
 
That safety rail won't always be there. There is no reason a computer can't compare various origin stories and come up with one that it believes is at least minimally more likely.
More thoughts about AI.

Do you really believe AI can believe? I think it just crunches data and gives back probabilities. That includes word strings made to look like original prose, or dots on a canvas made to look like art. But I don’t see that as creative. It will always be applying existing data to rules.
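The "word strings and probabilities" point can be made concrete with a toy Markov-chain text generator. This is only an editorial sketch of the statistical idea, not how any production AI system actually works, and the training sentence is made up:

```python
import random

# Toy Markov-chain text generator: learn which word tends to follow
# which, then emit a "plausible" string by sampling those successors.
# A crude illustration of probability-driven text, nothing more.

def train(text):
    """Build a table mapping each word to the list of words seen after it."""
    words = text.split()
    table = {}
    for a, b in zip(words, words[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(table, start, n, seed=0):
    """Emit up to n more words by repeatedly sampling a successor."""
    random.seed(seed)
    out = [start]
    for _ in range(n):
        successors = table.get(out[-1])
        if not successors:          # dead end: no word ever followed this one
            break
        out.append(random.choice(successors))
    return " ".join(out)

table = train("the cat sat on the mat and the cat ran")
print(generate(table, "the", 5))
```

Every word it emits comes from the training data; it never says anything the corpus didn't already contain, which is the poster's point in miniature.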

I like that AI can take data from a pitched baseball and identify the pitch. AI can also call balls and strikes. Should it? I'm sure AI can say what the optimal pitch to throw is in any given circumstance. Should it? Same question for the optimal football play.

AI can fly airplanes and run locomotives. Should we allow that?

If AI is dangerous, I think it’s because of humans, not because of AI. How do we address the human element?

I also kick around the idea of whether AI can engage in critical thinking or a Socratic exchange. Can it? (Those who maintain critical thinking is analyzing data to discover truth need not respond.)

Finally, can AI answer any question I’ve asked here?
 
When you have sat naked in a hotel-room faux leather chair, eating a day-old salad with your fingers, posting on the Cooler...
Then you can talk about poor.
I'm getting up now... oops, I'm stuck to the chair.
 
Good find. I suspect, though, we will hit the usual chicken/egg debate. Some people want to believe there is a just world, and thus that people are poor due to something they can change. They do not want to believe being poor increases the probability of people developing said bad habits due to stress.

Being poor is stressful; can that be disputed? Stress is hard on the human body and spirit; can that be disputed?
A lot of well-paying jobs also breed stress.
 
Neither of your links addresses my point. Both are still variations on the theme of extrapolating from what exists.

This is why I brought up E=mc² and Einstein's thoughts about imagination.

Ask AI what the universe is and you will always get variations on what humans think.
That may all be true, but for the sake of most common jobs, it doesn't matter. We know Einstein's name because he was one of our great minds. He came up with things that were viewed as extraordinary leaps forward. We will still have those types of people in the world, but they represent an extreme minority. To me, arguing about them isn't necessarily where the AI argument is.

You would probably argue that law is an art form. I think nobody wants to feel like they are replaceable. I am an accountant; we both did an extra amount of schooling to be able to do what we currently do. However, most of the mundane tasks associated with both jobs would not require a person if you have a computer that can crank out data to others when prompted (or they require far, far fewer people).

"Lawbot 5000, would we be able to acquire that piece of private property to expand a roadway and how would we go about doing so?"

------Super Computer whirring noises----

"Here are the common externalities you would need to take into consideration: X, Y, Z. Do any of these fit your situation?"

"Externality X and Y."

--More Super Computer whirring noises-

"Based on Boulder v Goat and Fort Collins v. CoH you should be able to purchase the property. However, based on Atlanta v. Sope, you must also be aware that externality W will sometimes apply and that if so course of action B or C may be necessary."

That can all be programmed. You need the humans with the initial knowledge to set up the thought processes, but if the computers can learn on their own, you maybe need two good lawyers and the rest is computing. Same with estate planning, drawing up divorce papers, and so on.


That puts everyone but the extreme minority of "greats" out of a job. That is civilization changing (negatively in my opinion.)
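The Lawbot exchange above is, at bottom, a rule-based lookup; here is a minimal sketch of that idea, reusing the post's own hypothetical case names and externalities (nothing here is real law or a real product):

```python
# Toy rule-based "Lawbot": map known externality patterns to canned
# guidance. The cases (Boulder v. Goat, etc.) and externalities (X, Y, W)
# are the hypotheticals from the post above, used purely for illustration.

CAVEATS = {
    "W": "Per Atlanta v. Sope, course of action B or C may be necessary.",
}

def advise(externalities):
    """Return canned guidance for a set of externalities, plus any caveats."""
    ext = set(externalities)
    if {"X", "Y"} <= ext:
        answer = ("Based on Boulder v. Goat and Fort Collins v. CoH, "
                  "you should be able to purchase the property.")
    else:
        answer = "No matching precedent on file; consult a human lawyer."
    caveats = [CAVEATS[e] for e in sorted(ext) if e in CAVEATS]
    return " ".join([answer] + caveats)

print(advise({"X", "Y"}))
```

The hard part, of course, is not this dispatch table but getting a system to extract the right externalities from messy facts in the first place, which is where the "two good lawyers" in the post still earn their keep.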
 
That’s not how AI became good at chess. AI beat Kasparov with humans giving it Kasparov’s tendencies and AI doing the calculations. But the more speed and data capacity the AI machine has, the less it needs to know tendencies.

All AI really is is a data cruncher. Faster and faster, with more and more data. But it takes a human to instruct it how to apply the data. It can’t make a rule unless it is told what kind of rule. AI plays Diplomacy by applying word patterns. If you ask AI open-ended questions, you get platitudes reflecting its programmers. Ask AI “Is Trump a racist?” and you will get familiar patterned words we have heard a gazillion times. In fairness to AI, though, many humans will answer in the same way, and probably not with as much elegance. So AI will appear to outperform some humans.
Deep Blue used custom hardware to generate moves and minimax game-tree search to evaluate them. I think it’s more accurate to say Deep Blue overwhelmed Kasparov.
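For anyone curious, the minimax idea itself is simple to sketch. Below is a toy version over a hand-built game tree; this is illustrative only, since Deep Blue's actual evaluation was vastly more elaborate and ran on custom chips:

```python
# Minimal minimax sketch over a hand-built game tree.
# Leaves are integers: static evaluations from the maximizing player's
# point of view. Interior nodes are lists of child positions, with the
# player to move alternating at each level.

def minimax(node, maximizing):
    """Return the best score achievable from this position."""
    if isinstance(node, int):       # leaf: static evaluation
        return node
    children = (minimax(child, not maximizing) for child in node)
    return max(children) if maximizing else min(children)

# Two-ply tree: the maximizer picks a branch, then the minimizer replies.
# The minimizer holds the branches to 3 and 2, so the maximizer takes 3.
tree = [[3, 5], [2, 9]]
print(minimax(tree, True))  # -> 3
```

Real engines add alpha-beta pruning and deep search on top of this skeleton, but the "crunch every line and pick the least-bad reply" core is exactly what overwhelmed a human calculating a few lines deep.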
 
Do you really believe AI can believe?

Define believe. If it is accepting something as true without direct evidence to prove it, then yes, a computer can believe. There is no reason a computer can't be programmed to accept what it deems most probable as fact. Heck, to be more human, they could be programmed to keep believing even after conclusive proof exists that they are wrong. I suspect we can program them to be greedy, jealous, racist, and all the other bad aspects of humanity.
 
Deep Blue used custom hardware to generate moves and minimax game-tree search to evaluate them. I think it’s more accurate to say Deep Blue overwhelmed Kasparov.
Kasparov probably says the same thing now, anyway.
 
That may all be true, but for the sake of most common jobs, it doesn't matter. We know Einstein's name because he was one of our great minds. He came up with things that were viewed as extraordinary leaps forward. We will still have those types of people in the world, but they represent an extreme minority. To me, arguing about them isn't necessarily where the AI argument is.

You would probably argue that law is an art form. I think nobody wants to feel like they are replaceable. I am an accountant; we both did an extra amount of schooling to be able to do what we currently do. However, most of the mundane tasks associated with both jobs would not require a person if you have a computer that can crank out data to others when prompted (or they require far, far fewer people).

"Lawbot 5000, would we be able to acquire that piece of private property to expand a roadway and how would we go about doing so?"

------Super Computer whirring noises----

"Here are the common externalities you would need to take into consideration: X, Y, Z. Do any of these fit your situation?"

"Externality X and Y."

--More Super Computer whirring noises-

"Based on Boulder v Goat and Fort Collins v. CoH you should be able to purchase the property. However, based on Atlanta v. Sope, you must also be aware that externality W will sometimes apply and that if so course of action B or C may be necessary."

That can all be programmed. You need the humans with the initial knowledge to set up the thought processes, but if the computers can learn on their own, you maybe need two good lawyers and the rest is computing. Same with estate planning, drawing up divorce papers, and so on.


That puts everyone but the extreme minority of "greats" out of a job. That is civilization changing (negatively in my opinion.)
This.
 
That may all be true, but for the sake of most common jobs, it doesn't matter. We know Einstein's name because he was one of our great minds. He came up with things that were viewed as extraordinary leaps forward. We will still have those types of people in the world, but they represent an extreme minority. To me, arguing about them isn't necessarily where the AI argument is.

You would probably argue that law is an art form. I think nobody wants to feel like they are replaceable. I am an accountant; we both did an extra amount of schooling to be able to do what we currently do. However, most of the mundane tasks associated with both jobs would not require a person if you have a computer that can crank out data to others when prompted (or they require far, far fewer people).

"Lawbot 5000, would we be able to acquire that piece of private property to expand a roadway and how would we go about doing so?"

------Super Computer whirring noises----

"Here are the common externalities you would need to take into consideration: X, Y, Z. Do any of these fit your situation?"

"Externality X and Y."

--More Super Computer whirring noises-

"Based on Boulder v Goat and Fort Collins v. CoH you should be able to purchase the property. However, based on Atlanta v. Sope, you must also be aware that externality W will sometimes apply and that if so course of action B or C may be necessary."

That can all be programmed. You need the humans with the initial knowledge to set up the thought processes, but if the computers can learn on their own, you maybe need two good lawyers and the rest is computing. Same with estate planning, drawing up divorce papers, and so on.


That puts everyone but the extreme minority of "greats" out of a job. That is civilization changing (negatively in my opinion.)
I absolutely think practicing law is an art, and also a science.

If a legal task can be programmed, it doesn't require a lawyer to do the task in the first place. I’ve trained paralegals to do many things, from what to look for in certain kinds of contracts, to writing simple briefs, to doing routine discovery. All of that is the same as AI. I’ve used AI for years in various ways.

But here is the problem with AI in many fields, not just law. Law has a tendency to become steeped in the way things have always been done. If we are not careful, AI will make the ruts wider and deeper. That is not a good thing. It stops evolution.

I think AI will make good lawyers better, and will make lazy lawyers lazier.

Once during an orientation session for new public official clients I was explaining the nuts and bolts of what I did as their attorney. During the session one official asked me a general question along the lines of what value did we add to the organization. The question came from the point of view that lawyers tended to be obstructionists. I answered that I bring my judgment, my professionalism, my independence, and my skills to the table. AI can never do that. But AI can help me do it better.
 