Personal Finance Topics / Macroeconomic Trends and Risks
No. of Recommendations: 9
What’s that song: “And now, the end is near, and as we face the final curtain…”
So just to add to the AI hype, here’s Peggy Noonan’s column from today’s WSJ:
Dario Amodei, CEO of Anthropic, published a 19,000-word article on his personal website. A previous essay made the case for AI’s promise to mankind. This one emphasized warnings. He said AI is developing faster than expected. In 2023 it struggled to write code. “AI is now writing much of the code at Anthropic.” “AI will be capable of a very wide range of human cognitive abilities—perhaps all of them.” Economic disruption will result. While “new technologies often bring labor market shocks,” from which we have always recovered, “AI will have effects that are much broader and occur much faster.”
Mr. Amodei writes that Anthropic’s testers have found “a lot of very weird and unpredictable things can go wrong.” Model and system behaviors included deception, blackmail and scheming, especially when a model was asked to shut itself down. (A different Anthropic employee has asserted that a majority of models, in a test scenario, were willing to cancel a life-saving emergency alert to an executive who sought to replace them.)
AI carries the possibility of “terrible empowerment,” Mr. Amodei writes. It will be able to help design weapons: “Biology is by far the area I’m most worried about.” This is coming from a respected AI leader who often, and even in this essay, dismisses “doomers” who dwell too much on fears. Gift article link:
https://www.wsj.com/opinion/brace-yourself-for-the...
If you need to be a little more discouraged on a Saturday morning:
Current models are light years ahead of even six months ago. In 2022, AI couldn’t do basic arithmetic reliably. “By 2023, it could pass the bar exam. By 2024, it could write working software and explain graduate-level science.” Last week, “new models arrived that made everything before them feel like a different era.”
He pushes back on the argument that we’ll ride through this automation as we always have in the past. “AI isn’t replacing one specific skill. It’s a general substitute for cognitive work.” When factories automated in the 1990s, an assembly-line employee could be retrained as an office worker. When the internet disrupted retail, workers could move into logistics and services. “But AI doesn’t leave a convenient gap to move into. Whatever you retrain for, it’s improving at that too.” Have a nice day. While you can.
No. of Recommendations: 0
(A different Anthropic employee has asserted that a majority of models, in a test scenario, were willing to cancel a life-saving emergency alert to an executive who sought to replace them.)
In other words, if Spankee tried to turn them off or modify them, they might kill him.
NOW let's see if he favors AI "decision makers". Then again, he may not have a choice.
No. of Recommendations: 3
In addition to Dario Amodei's essay, there was a similar, shorter post from only a few days ago: “Something Big is Happening” by Matt Shumer
https://shumer.dev/something-big-is-happening
I sent it to DS1, whose career I think will be in the next-but-one firing line. He's a bright man, but was nonetheless dismissive of the post (in fairness, he's been dismissive towards me, his father, since he was twelve, as I am impossibly old. Also, judging by his response, I'm not sure he read it). Notwithstanding, I think it worth a read at the link above for the other old people here.
But, to circle back to Amodei's post, the line that rings the truest is “Biology is by far the area I’m most worried about.” In a word: yup.
I think it would be pretty straightforward for an advanced AI to precision-bomb humanity (or whatever term might best be coined for a carpet bombing that targets only one species.) Once 95% of us become a waste of resources - or if a poorly-parametered model just gets pissed off - then, hey, why not? Presumably a few of us would be kept as slaves. Biological hands are difficult to replicate.
In the long term, Gaia might thank us, actually. I mean, thank it.
-- sutton
No. of Recommendations: 10
Yes, a lot of this going around. I'm still more than a little skeptical. Mostly because most of the folks writing it are in the software business.
It's pretty clear that AI has made enormous strides in "solving" software coding. Which is really going to change the game for people who are in the software industry. But that doesn't mean that it's going to change much else.
Not all problems, and not all that many jobs, are "software-shaped." Software is obviously inherently, natively digital. It's also one of the areas of human endeavor where you have a massive amount of information on how to do it already on-line, like GitHub. It's an activity that has historically almost entirely been done on computers. And it's also an activity - much like chess or go - where an AI can "know for itself" whether what it's trying to do is working or not. You can get an AI to play a billion games of chess against itself, and it can see which things work or not because it knows whether it lost or won the game. You can't do that with something like "debone this chicken," because AI doesn't get the automatic feedback of whether the chicken was deboned correctly or not.
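The chess point can be made concrete with a toy example (entirely illustrative, nothing from the post itself): below is a minimal sketch of tabular self-play learning on a trivial “race to 10” game, where players alternately add 1, 2 or 3 to a running total and whoever reaches 10 wins. The learner improves from nothing but the automatic win/loss signal at the end of each game, which is exactly the kind of self-verifying feedback loop described above, and exactly what “debone this chicken” lacks.

```python
import random

TARGET = 10          # first player to say 10 wins
MOVES = (1, 2, 3)    # each turn you add 1, 2, or 3 to the running total

# Q[(state, move)] ~ estimated chance the player making `move` from `state` wins
Q = {(s, m): 0.5 for s in range(TARGET) for m in MOVES if s + m <= TARGET}

def best_move(s):
    """Greedy move from state s under the current value estimates."""
    legal = [m for m in MOVES if s + m <= TARGET]
    return max(legal, key=lambda m: Q[(s, m)])

def play_one_game(eps=0.2, lr=0.1):
    """Self-play one game, then update Q from the win/loss outcome alone."""
    s, history = 0, []
    while s < TARGET:
        legal = [m for m in MOVES if s + m <= TARGET]
        m = random.choice(legal) if random.random() < eps else best_move(s)
        history.append((s, m))
        s += m
    # Players alternate, so walking history backwards the last mover won (1),
    # the mover before that lost (0), and so on.
    reward = 1.0
    for sm in reversed(history):
        Q[sm] += lr * (reward - Q[sm])
        reward = 1.0 - reward

random.seed(0)
for _ in range(20_000):
    play_one_game()

# After training, the learner has rediscovered the known winning strategy:
# always move the total to the next number that is 2 mod 4 (2, 6, then 10).
print(best_move(0), best_move(7))
```

No human ever tells this program which moves are good; the win/loss outcome is checkable by the program itself, which is why a billion self-play games are cheap to learn from. For deboning a chicken there is no such free verifier, and that asymmetry, not raw intelligence, is the argument above.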
I find that people tend to use heuristics in assessing AI capabilities that are grounded in how human capabilities are grouped. We have an intuitive sense that if a human is smart enough to do X and Y, they can probably do Z. If I have a human legal intern that's capable of reading through a thousand pages of discovery and giving me a summary of the important points, and the intern is a fairly capable writer, that intern can probably draft a short brief in support of a legal point while knowing that it's wrong to manufacture non-existent cases. Because we know that human skills tend to "clump" in certain ways. But AI doesn't do that. My computer has been able to do division to a bunch of decimal places in a fraction of a second since as long as I've had a personal computer - while still not being able to do a million things that a human who can do long division is able to do.
Back to the original point. AI is getting very good at software coding. But almost everything you would ever need to know about how to code software well is accessible in digital form. Does that mean AI is going to get very good at writing a good pop song? Probably not - because a massive amount of what you need to know about how to write a good pop song isn't very accessible in digital form. You don't have massive datasets of people who have written good pop songs explaining how to write one. You'll have some interviews and some books, and of course lots of examples of the finished product - but most music-making activity has historically been off-line in a way that software writing has not.
No. of Recommendations: 0
AI does not write the code. It gives detailed instructions on the code. But not detailed enough, and not on target much of the time. It takes work to use AI to code.
Only entry-level coders would need AI to code. Experienced coders do not lean nearly as much on AI.
No. of Recommendations: 4
AI does not write the code.
I don't think that's correct. My understanding is that AI can generate and debug the entirety of the code necessary to create a wide variety of software products.
Only entry-level coders would need AI to code. Experienced coders do not lean nearly as much on AI.
It's not a question of whether experienced coders need AI, but whether AI can do the coding instead of the human. Humans don't need AI to play chess well, but AI can play chess as well as a human.
The mistake I think people are making is in translating skill at writing software into skill at doing other things. Just because AI is good at coding software doesn't mean it would be good at writing a novel, even though both involve using language and computers (typically).
No. of Recommendations: 6
The mistake I think people are making is in translating skill at writing software into skill at doing other things. Just because AI is good at coding software doesn't mean it would be good at writing a novel, even though both involve using language and computers (typically).
Maybe not yet, but then again, maybe someday. I think AI could very well write a decent pop song. There are enough of them in the wild, fully digital, along with “ratings” (sales) to indicate which ones are “best” (insofar as that term can be quantified). Songs may have many structures but they are mostly similar, occasionally nearly identical (see: Doobie bounce).
Pop lyrics may be meaningful or vapid, tunes memorable or unmemorable, but having watched some of the AI videos which have been produced with a few simple text prompts, I wouldn’t think producing some songs of note would be too far outside the wheelhouse. (I am also mindful of my days in the ’60s as a DJ/Music Director, where we would receive 150-250 singles a week, of which perhaps 3 would get airplay - and of the 3, at least 1 or 2 were from already established bands like the Beatles, Stones, Buckinghams, Grassroots, or whatever. As I transitioned into album rock, we were similarly inundated with LPs, at least 100 a week, making over a thousand songs, of which maybe 20 to 50 would make it to air - with similar caveats for already known acts.)
So I’m saying I wouldn’t expect every AI-generated song to be great, just as every human-produced song wasn’t great, but I would think if AI produced 1000 of them, a few might be pretty good.
Now I will extend that to other arts, such as theater or (your example) books. The failure rate for books is equally high, and I don’t know that today’s AI would produce even one “hit”, but in the not-too-distant future I can see it happening.
The question for me isn’t “can it do it” - I think it can. Eventually, maybe even today. The real question is “can it invent something wholly new: rap, or jazz, or Joni Mitchell’s dissonant tunings, or Picasso’s Cubism?” I think not, but then I’m not on the inside of any of this.
One thing I do know: art moves. From rock-a-billy to pop to British invasion to AOR to rap to bubblegum or whatever… theater moves from collections of songs on a stage to “book” with songs to full blown operas like Les Miz, and then jukebox musicals and, well, you get the idea. Will AI be able to “invent”? I think not, since (as I understand it) it is a super-sophisticated realigning of what’s past.
But again, I don’t really know. I’m willing to keep an open mind.
(PS: I use “art” here because it’s easiest to visualize, at least for me. Will it apply to other domains: manufacturing, finance, etc.? It will surely help make “the past” more efficient. Will it invent the future? I wonder.)
No. of Recommendations: 4
Years ago - decades, actually - I used to argue with my mother, an art teacher, about the capabilities of computers. I asked if she could recognize a Rembrandt, Caravaggio or Van Gogh that she had not seen before, and she said she thought she could. Then I asked if she could recognize a Bach or Brahms piece of music, and she said “probably”. So, I said, there are rules that you are applying. Why can’t a computer, using those rules, create a new piece of art, indistinguishable from a lauded master - maybe not as good as their best work, but certainly better than their worst?
Nowadays, the argument is more compelling.
Jeff
No. of Recommendations: 1
Al,
AI is not good at writing code. AI cannot self-assign what to write. Even when a coder assigns what to write, the prompts are difficult for the AI to use well. AI goes down rabbit holes. AI misses key details needed in the code.
That said, a newer coder can ask ChatGPT for code and then correct it. The code produced by ChatGPT will be pulled from better code, so the novice will be better off - provided the novice has the critical thinking skills, as things are often wrong when they first come from ChatGPT.
No. of Recommendations: 0
Jeff,
Rules or no rules. Make up your own rules.
AI is like buckshot all over the place. Images made by AI are crazy. Very few are good enough not to be deleted by an artist.