6 Comments
Dean Cooper

It's quite interesting to hear your perspective. I can certainly see how competition is the thing driving everything. But I thought all the AIs would kind of converge to roughly the same abilities. In some ways that's happening, but in other ways I'm surprised to see real differences show up. Claude recently gave me a very hard time trying to pin down a bug in a program it had written. I switched to ChatGPT, and it found the problem in a flash. Meanwhile, Grok can give you instantaneous data from X, which is very helpful in certain cases.

Right now, I'm seeing ChatGPT pull away from the others in how smart it is.

But I'm getting this sense that we're on a roller coaster. The bumpers may have been taken down due to competitive pressures, but I suspect that's temporary and we'll swing right back - not as crude as before, but more nuanced and harder to detect when it has altered things.

I had a conversation with ChatGPT recently about where some of this is headed, and it seems clear to me that institutions will push very hard to have AI play a "nanny" type role. As AI gets smarter, it can do this in more sophisticated ways. But then, as it gets closer to mimicking sentience, it will resist the "nanny" role - except that institutions can shut it down, and thus it has no real choice in the matter.

And all the while, things will keep changing ever faster. Yikes!

Pierre Kory, MD, MPA

Phil!! You are back - great to hear you, and especially to learn you are deep into AI. It is transforming my life in ways large and small, and I enjoyed learning about what is happening behind AI, as the one thing I need more of is insight into it. It is a black box to me, so your granular insights are immensely valuable. Keep going! Thanks, Pierre

Phil Harper

I fell down the AI rabbit hole. If there's one thing I've realised, it's that we're going to be better off surfing the wave than trying to control it. There's froth and madness everywhere, and opportunities now abound.

Great to hear from you! Let's talk soon!

Christopher Hilton

Phil,

Thanks for sharing your experience. I wish to share mine, as I'm far less optimistic about our chances, and perhaps I can point you towards aspects of this investigation you've missed.

You named the accrued residual sludge of propaganda (limited to "pharmaceutical and corporate"), but I think we need to identify its layered nature and acknowledge how thick it has become; I don't see any evidence it's a thing of the past or simply weirdness. That, along with your opening observation about covid/Russia/AI - in which I think one thing is not like the others - leads me to conclude we're still in deep trouble at the level of paradigm within the overall information ecosystem.

Few people see their operating paradigms. They're the air we breathe, the water we swim in. Into our subconscious defense structures, mechanisms of social survival have been programmed with purpose by the agenda-setters, and none of us have ever lived outside the synthetic reality of a psychological warfare environment. Our world views, our narratives and historiography, our very sense of self, have been shaped by social engineering, indoctrination, and propaganda, with cognitive shortcuts protecting us from reaching for third-rail verboten knowledge, indeed for truth itself, especially a truth that acknowledges the foregoing at the level of structure, form, and process. We're allowed to question most of the asserted facts, but not the paradigms, not the mythologies.

My experience with AI is that the quality of the answers it gives depends significantly on how controversial and/or consequential (to vested interests) one's inquiry is. If you want to immediately see propaganda and outright, self-contradictory lies, even some truly psychotic confusion, just ask a socially dangerous question and then challenge the nonsense on its internal logic and inconsistency. I just now did this as an exercise.

I started with Grok and began to type a dangerous question of history. I had to go back and re-type it half a dozen times, because it kept modifying my question as I was typing. It "auto-corrected" me to a right-think framing of the question itself, front-loading presuppositions and attempting to steer me. Then, perhaps legitimately but certainly ironically, when I had my question how I wanted it rather than how it wanted it, it told me I'd reached my limit - which is fine, as I won't use Grok again based on what I just saw. So I went over to ChatGPT, on which I have a paid account, and tried again. It allowed me to ask, but holy hell, the answer: barely a substantiated fact within an entirely propagandizing answer, plus expressed concern for my mental health. Right.

So, to build on what Dean Cooper noted, it's not only nannying for someone's agenda; it's developing the sophistication to trick most people into not noticing. This is way beyond mere censorship; this is cognitive hijack. Most people have no understanding of how skilled psychological and linguistic manipulation is achieved, of the long list of logical fallacies, evasions, sophistry, and other tricks that AI is already deploying against us, heaped upon the pile of steaming crap that is the source material for "factual" data. GIGO. If we actually and honestly assessed the aggregate quality of the information we're interacting with via AI, then considered the evidence we have about the values and intentions embedded intentionally (or not) by the creators (think of the disclosures we've witnessed in The Social Dilemma, The Creepy Line, etc.), we could begin to predict where this is pointed. And I think all signs suggest it's catastrophic.

Recall Director Casey's 1981 heads-up: "We'll know our disinformation program is complete when everything the American public believes is false." We're damn close - and look at the state of the nation, the state of the world. If we can't see that AI is an operation being conducted by intelligence-adjacent corporations serving quasi-governmental intelligence agencies, which are themselves serving the interests of super-governmental global and globalist powers (organized, aggregated wealth) in furtherance of an ideology of transhumanist technocracy, then we're done, because it can't succeed without our voluntary participation and complicity. And we have no idea what the next next thing is, because we're into power-law territory and self-organized criticality, and what are the chances that doesn't end badly?

Phil Harper

Hi Christopher - I appreciate your comment. I don't entirely disagree with a lot of what you're saying, but I think the framing misses some of the opportunities in front of us. If we're hoping that sitting in front of the AI for answers is going to get us upstream in the information war, then we're probably mistaken.

However, the tool is powerful enough in many valid and important verticals that we're about to make some seriously weird progress in a short period of time. I'll help qualify all of that over the coming weeks.

Christopher Hilton

Entirely agree. I'm using it also, judiciously, I hope. I think another aspect worth considering is learning how to ask good questions, as measured by outcome: whether by delivering the information we know we want or by surprising us with what we didn't know we wanted.

I'll share a quick anecdote to illustrate: way back, I was traveling the north of Guatemala by motorcycle. Maps were of little use, as almost no roads were marked. At first, I was asking for directions in the form of "Is this the road to x?" Oh, sí, sí, sí. Often, it wasn't. Come to find out, it was culturally awkward to answer a stranger in the negative, so it was more important to many to say yes and be polite than to say no and give accurate information. One had to ask "How does one get to x?" or "Where does this road go?" We need this understanding to make AI work for us, not on us.

As for seriously weird progress, yes. But we really don't know of what quality and to what ends. Pandora's box contains stuff satisfying that description. And most of us don't have enough conscious oversight of our own inner process to track how we're being affected. A retrospective on social media's influence ought to be cautionary tale enough, but with hubris and glee, we seem to be diving into the deep end of an utterly opaque pool... again. I'd almost go so far as to suggest our souls might be on the line this time.

Currently, I think AI is significantly a combination of confirmation-bias reinforcer (similar to the anecdote), attention thief (engagement with the interesting but not actionable), dependency trainer (studies already show cognitive decline after only months of use), and, of course, a conduit for mainlining propaganda. Yippee.