AI companies love to hype how AI will greatly benefit the economy and transform intellectual labor, but I hardly see any discussion of how much damage it will cause to the economy when you can no longer trust that you're on a video call with an actual person. Maybe the person you're interviewing is actually an AI impersonating someone, or maybe they never existed in the first place. Information found online will also no longer be trustable: footage of some incident somewhere may have been entirely fabricated by AI, and we already experience misleading articles today.
Money will have to be wasted on unnecessary flights to see stuff or meet people in-person instead of video, and the availability of actual information will become more and more limited as the sea of online information gets polluted with crap. It may never be possible to calculate the full extent of the damage in monetary value.
Laws will be passed to make it "safer", just like what is happening with ID verification systems. Every image or video generator will be required to add a watermark: something visible which cannot easily be removed, or something hidden which can be detected and blocked. Access to models which do not comply will be made harder through ID verification checks or something similar.
There will be some regulatory capture in between.
The world will kick into gear only when something really bad happens - maybe an influential person, rich or a politician, fooled into doing something catastrophic by a deepfake video or image. Until then, normal people being affected isn't going to move the needle.
Verification needs to work the other way around: some kind of verifiable chain of trust for photos and videos from real cameras. Watermarking all generated media is impossible.
Partially agree.
However, this problem has existed with scam e-mails since the 90s.
For me the solution is in signed e-mails and signed documents. If a person invites me to an online meeting with a signed e-mail, I trust that it's really them.
Same for footage of wars, etc. The journalist taking it basically signs the videos and vouches for their authenticity. If it turns out to be AI generated, then we would lose trust in that person and wouldn't use their material anymore.
With cash, you can only steal so much (or have transactions of up to certain size) until you run into geographical and physical constraints. With cryptocurrency, it’s possible to lose any amount.
With humans writing scam emails, you can only have so many of them until one blows the whistle. With LLMs, a single person can distribute an arbitrary amount.
At some point, quantity becomes a new quality, and drawing a parallel becomes disingenuous because the new quality has no precedent in human history.
I think he was referring to a cryptographic signature, possibly using the "web of trust" to get the key. I'm not convinced we need a central authority to solve this.
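For what that can look like without a central authority, here's a toy sketch in Python (not modeled on any particular PGP implementation): a key is accepted if someone you already trust has vouched for it within a couple of hops. All names and endorsements below are made up.

```python
# Toy web-of-trust lookup: accept a key if it is endorsed by a key you already
# trust, within `max_hops` hops. Real systems sign actual key material; here
# the "keys" are just names.
from collections import deque

# endorsements[a] = keys that `a` has vouched for
endorsements = {
    "alice": {"bob"},
    "bob": {"carol"},
    "carol": set(),
    "mallory": {"mallory2"},
}

def trusted(target: str, directly_trusted: set, max_hops: int = 2) -> bool:
    queue = deque((key, 0) for key in directly_trusted)
    seen = set(directly_trusted)
    while queue:
        key, hops = queue.popleft()
        if key == target:
            return True
        if hops == max_hops:
            continue
        for endorsed in endorsements.get(key, ()):
            if endorsed not in seen:
                seen.add(endorsed)
                queue.append((endorsed, hops + 1))
    return False

print(trusted("carol", {"alice"}))     # True: alice vouched for bob, bob for carol
print(trusted("mallory2", {"alice"}))  # False: no trust path from alice
```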
people at my org were gleeful when they learned they could hook LLMs into Slack. Even if we had some reliable, well-used signature system, I think people would just let AI use it to send emails on their behalf.
If the AI age has taught me anything, it's that most people do not care what their output is. They'll put their name on anything, taste or quality does not matter in the least. It's incredibly depressing.
There are people hosting agents online to talk to other agents etc. on their behalf. How difficult is it to just instruct such an agent to do the tasks you mentioned? You're assuming it's done by "bad actors" while it's most likely just going to be done by "everyone" that knows how to do it.
I mean, emails were and still are a huge security risk. Sometimes I'm more scared of employees opening and engaging with emails than I am of anything else.
"Is this a deepfake video call" is a major plot point in a pretty big movie currently in theaters, so I think this is getting into the broader zeitgeist.
> Information found online will also no longer be trustable
Most information you can access publicly, including Wikipedia, is the result of an astroturfing fight. Most information online has not been trustable for a double-digit number of years now.
> we already experience misleading articles today
Again, this has been happening for decades.
> footage of some incident somewhere may have been entirely fabricated by AI
Not like we did not already have doctored footage plaguing the public.
> Money will have to be wasted on unnecessary flights to see stuff or meet people in-person instead of video
Necessity to inspect the supply chain for snake oil has been a thing since at least EA (the Nasir one).
We may be dealing with a spam-scale version of the problem, but the problems themselves have already been there.
All of these are true, but just as it happened before with the internet, it's accelerating even further. There are clear costs that cannot just be hand-waved away.
I'm not sure we can say it's accelerating. The techniques that adversarial actors use have always been changing, and when they shift tactics it can take a while for an adequate defense to be adopted. We're still dealing with SQL injection in the OWASP top ten. What I think would indicate an acceleration is when the most security-oriented organizations continuously fail to defend against new attacks. If we start hearing about JPMorgan and Google getting popped every month or two, we're in trouble.
We are still in the early stage of AI and already I struggle to tell what is real or fake on my Twitter feed. It will only get better in its deception with time.
You know those incriminating Epstein photos with his associates? A few years from now a common defense from people like that would be that the photos were AI generated, and it would be difficult to prove them wrong beyond reasonable doubt.
People in previous cases already attempted to dismiss incriminating pics of themselves as being the work of clever Photoshop artists.
Touching grass. Valuing in-person connections. Focusing on the community, meatspaces and actual people around you.
Getting off of the Internet and off of our devices. It's not just a solution to AI/LLMs modifying our reality but also a solution to [gestures wildly at the cultural, societal and global communication impacts of the past ~16 years].
This sentiment is unpopular, but it's true. Prioritize true connections and experiences.
I don't know of a solution. I don't think even identity verification will meaningfully solve this. People will get hacked, or provide their SEO-spamming agent with their own identity, or purposefully post fake videos under their own identity. As it becomes more normal to scan your ID to access random websites, it will also become easier to steal people's identities and the value of identity verification will go down.
People don't get hacked - devices get hacked. So all we need is a better chain of trust between two people. This is not a technology development problem as much as a technology implementation problem. And a political problem.
People get hacked -- a device could be flawless, but if a person is a victim of "Social Engineering" and hands the attacker a password, there's nothing the designer of the device could do about it.
2FA has tried to solve exactly this. Not many attacked people will hand over their password AND their phone. Yes, I know, they might hand over one authentication code (and I know people who did exactly that)... We should also look into reducing the attack surface - if your Instagram gets hacked, your Facebook shouldn't get hacked as well. But the current big tech centralization leads us to that single point of failure, because they don't care about users' concerns, only about market grab. So... what now? Do we get politics into this?
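For reference, this is roughly what an app-based one-time-code second factor looks like; a minimal sketch assuming the third-party `pyotp` package, with made-up account and issuer names. A victim can still read one code to an attacker, but each code expires after about 30 seconds and there is no SIM to swap.

```python
# A minimal sketch of an app-based one-time code (TOTP, RFC 6238), assuming
# the third-party `pyotp` package; the account and issuer names are made up.
import pyotp

# Enrollment (done once): generate a shared secret, store it server-side and
# load it into the user's authenticator app via the provisioning URI.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print(totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleCorp"))

# Login (every time): the user types the 6-digit code currently shown in the app.
submitted = totp.now()  # stand-in for whatever the user typed
print("accepted:", totp.verify(submitted, valid_window=1))  # allow +/- 1 time step
```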
The best thing I can think of is domain names. Domains are tied to addresses and billing, and sites are people or businesses, with physical locations one can visit.
Maybe a good startup idea would be "local verify", where a client can check locally whether the online destination is real.
I’m seeing a huge increase in companies requiring in person interviews now. Seems there is a real possibility the internet as we know it will be destroyed.
LinkedIn is completely destroyed now. There are tons of AI bots there, but real humans are now fronts for AI too. So you can't even trust content from people you know.
An identity service is not useful, because that person might be a real person but just a pipe to AI, like we see on LinkedIn.
> damage it will cause to the economy when you can no longer trust that you're on a video call with an actual person
What damage are you talking about?
I'm not sure I understand why it matters that there is no real person there if you can't actually tell the difference. You're just demonstrating that you don't actually need a human for whatever it is you're doing.
Your wife or mother calls you or video calls you and says to meet her somewhere, or to send money, or to pick up groceries or whatever. Does it not matter that it wasn't her? Could it be someone trying to manipulate you into going somewhere, to be robbed or whatever? At any rate, you'll need to verify that information came from the source you trust before you act on it, and that verification has a cost.
The damage is to the trust we have in our communication media. The conclusion here is that every person is trivial to impersonate; that's the damage.
Ok fine, let's put it in the context of business. Your competitor impersonates your customer, gives you bad instructions. After following the bad instructions, you lose the contract with your customer, and your competitor (the attacker) is free to try and replace you.
If you got a suspicious text, the logical thing is to call up the person who sent it and try to verify it. AI impersonation makes that much harder.
Or even better, open the on-prem AI portal and type something like "I just got a suspicious call from client X, but I am on a lunch break. Call him and use a fake video of me. Ask him if what he said is true..."
Because what you are actually doing is exchanging symbols, tokens, if you will, that may be redeemed in a future meatspace rendezvous for a good or service (e.g. a job, a parcel). These tokens are handshakes, contracts, video calls, etc. to be exchanged for the actual things merely represented therein.
Instead what we have now with AI is people exchanging merely the tokens and being contented with the symbol in-and-of itself, as something valuable in its own right, with no need for an actual candidate or physical product underlying the symbol.
There is a clip by McLuhan I can't be assed to find right now where he says eventually people will stop deriving pleasure from the products themselves and instead derive the feelings of (projected) accomplishment and pleasure from viewing advertisements about the product. The product itself becomes obsolete, for all you actually need to evoke the desired response is the advertisement, or the symbol.
A hiring manager interviewing an AI and offering it a job is like buying the advertisement you just watched, and.... that's it. No more, the transaction is complete.
The grandparent post has the belief that human interaction is intrinsically better. Not sure I agree, but I can understand the POV.
However, the increase in fake videos that are difficult to tell from real is indeed a potential issue. But the fact that misinformation today is already so prevalent is evidence that better video doesn't make it any worse than it already is imho.
You're not sure if human to human interaction is intrinsically more valuable than a human talking to a facsimile? That feels like a very dangerous position to hold for one's ethical calculations and general sanity. I'm clinging tightly to the value of the bond with other people, even the passing connection, but certainly with my family members as this article is about.
> At first, my aunt wasn't buying that any AI was involved. [...] There was a long pause. "I was like 90% sure," she said, hesitating. "But that sounded more artificial."
There is a thing about many people. I don't remember the phenomenon's name, if it has one, but it goes like this:
Given enough time to reconsider options, people will be endlessly flip-flopping between them grabbing onto various features over and over in a loop.
This phenomenon (or a closely related one?) is recognized as Kotov Syndrome in the context of chess.
A summary, courtesy of chess dot com:
> The name of this "syndrome" comes from GM Alexander Kotov, author of the classic chess book Think Like a Grandmaster. In the book, Kotov described an incorrect yet very common calculation process that often leads players to select a suboptimal or bad move.
> According to Kotov, in positions where the lines are complex and there are numerous candidate moves and variations to calculate, it's easy to make a hasty move. A player in that situation might spend too much time going over two moves and all of their ramifications without finding a favorable ending position. In that process, the player is likely to go back and forth between the two different lines, always coming to the same unsatisfying conclusion—this wastes precious mental energy and time.
> After spending too much time evaluating the first two options, the player gives up the calculation due to time pressure or fatigue and plays a third move without calculating it. According to the author, that sort of move can cause tremendous blunders and cost the game.
People will default to believing something is AI if there's no downside to that opinion. It's a defence mechanism. It stops them being 'caught out' or tricked into believing something that's not true.
As soon as there's a potential loss (e.g. missing out on getting rich, not helping a loved one) people will switch off that cynical critical thinking and just fall for AI-driven scams.
I have a systematic way of approaching this kind of situation, where you have to rapidly estimate a thing, commit to the estimate, and are judged by the quality of your estimates in the long run. My approach is to first make a guess based off my gut, and then to pause and make a bet with myself: did I guess high or low? If my gut then says that my first instinct was too high or too low, I adjust from there. I can't guess great the first time, but this two-stage guessing works a lot better for me.
I'm sure I'm not the first to use this technique, but I don't know what it's called.
Dissonance between what you instinctively believe and what you think the other person wants you to say.
Easy to replicate by asking someone something obvious, like the weather, and when they reply ask “are you sure?” - they won’t be so sure any more (believing it’s a trick question)
If I ask my mother if I’m real, she’ll have a pause because she has never had to entertain such a question, or the possibility her son over the phone is an impostor. Good way to push someone towards paranoia and psychosis.
> Good way to push someone towards paranoia and psychosis.
Interestingly, these are both phenomena where we start to _lose_ the ability to question our thoughts or introspect. These are phenomena of self-confidence rather than of self-doubt.
This is the basis of the virtual kidnapping scam/grandparent scam, or panic manipulation more generally. The manufactured urgency keeps them from doubting: the voice on the phone being off is just fear, or a bad connection, for example.
I have personally intervened in one of those when I heard someone reading off a 6 digit number.
There's also another phenomenon, which is that whatever the latest idea is, it must be the best. Many people make this mistake and even convince themselves they are right now because "they used to think like that" before.
So at each stage in the loop they are always super convinced of the position.
Even without being 100% confident, at some point people have to decide what to do.
Actions might include some continuous checks built into them, like the famous plan-do-check-act.
Solipsism already tells us that the existence of anything beyond one's current, present experience of self is uncertain. So almost everything has to be taken for granted: anything outside a metaphysical argument requires an act of faith.
https://en.wikipedia.org/wiki/Solipsism
This is why you need a phrase that you've never shared in a text or on social media that you can use so your family knows it's you. Especially to protect them from scammers pretending to be you.
I bet that a confident scammer is prepared to deal with things like that. They want to put you in a state where you are under time and emotional pressure, and your "relative" will have a well-practiced excuse for why they can't answer your weird questions.
Imagine your crying grandson who caused a traffic accident in Mexico, and the police planted drugs in his car, and now he needs money to pay them off. He is in pain and probably has a concussion (an explanation for why he can't remember what you are asking), and the police are hassling him to get off the phone (time pressure, and an explanation for why the quality of the call is terrible). Will you get hung up on some code word he asked you to memorise years ago that you can't even remember anymore? And if you bring it up, he just starts crying and tells you that you are his last chance to turn his life around. And you remember when he was a wee little kid and he fell and scraped his knee and you comforted him. Just the thought of pressing him on the code makes you feel like a terrible person. Or not. And then the scammer just finds someone more gullible. Theirs is a numbers game after all.
Or just find a shared memory/moment not available on the internet when in doubt. I don't think people will be that eager to remember another passphrase.
> The solution the world's leading experts have landed on is one your grandparents could have come up with: codewords. You, your family, business partners and anyone else you communicate with about important subjects need to come up with a secret phrase that no-one else knows you can use in an emergency to verify each other's identities. Think of it like a convoluted form of the multi-factor authentication we all use to login online.
> "My wife and I have a codeword that we use if we ever get an unusual call," Farid says. "We haven't needed to use it yet, but sometimes I ask just to test her to make sure we don't forget it."
I've started to prove it (here on LinkedIn, countering its Moltbookification) via my bad handwriting – the final frontier of AGI. Finally, a lifetime of training to write more or less illegibly pays off.
https://www.linkedin.com/posts/fabianhemmert_handwriting-vs-...
It feels good to connect with humans that way.
I am trying to do the same with my (vibe-coded!) site "jetzt" (German for "now"), where I photo-blog impressions from everyday life. Only insiders will know what they mean beyond their aesthetic, and it also feels like a good way of human connection in these times.
https://jetzt.cx/
(No food, no plane wings, just ugly banalities and beautiful nothingness from everyday life.)
More than a year ago I suggested that our family adopt a sign/countersign type of authentication (I say "the migrating birds fly low over the sea", you say "shadeless windows admit no light" ;-). It was clear at that time that we were going to start seeing scams get more advanced and hard to tell from valid requests for money, for example.
I thought I'd get at least some traction, considering part of the family works for No Such Agency. Nope. <shrug>
Somewhat related: over the last few weeks at work we've started having people call our customer support asking for their e-mail addresses to be changed. The first one went through, but the scammer somehow messed it up and the address bounced. They called back in, and the support person they talked to recognized by voice that it wasn't the same person they'd talked to in the past. Now we've had this happen to 3 different accounts; the first two times it was people with thick Indian accents, the most recent one was suspected of being an AI-generated voice.
The sign/countersign still works even if it's unilateral. You say "the migrating birds fly low over the sea", they say "I told you already, we're not doing this stupid thing", and now they are authenticated.
This is scary but also kind of hilarious. You should feel proud your aunt still judges first before believing anything online. I've heard so many stories from friends lately. These scams are getting crazy. Scammers are already using pictures of influential people and even jumping on video calls pretending to be them.
Am I too naive in thinking the answer is rather simple? Cryptographic proofs (digital signatures). For text this should be trivial and for streaming video/audio you can probably hash and sign packets or maybe at least keyframes or something?
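For what it's worth, the signing part is indeed straightforward. A rough sketch of signing per-chunk (or per-keyframe) digests with Ed25519, assuming the third-party `cryptography` package; key distribution and the actual chunking of a video stream are the hard parts and are hand-waved here.

```python
# Rough sketch: sign a digest of each chunk (or keyframe) together with its
# position in the stream, so chunks can't be dropped or reordered silently.
# Assumes the third-party `cryptography` package; key distribution not shown.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()  # stays on the sender's device
public_key = private_key.public_key()       # shared with recipients in advance

def sign_chunk(index: int, chunk: bytes) -> bytes:
    digest = hashlib.sha256(index.to_bytes(8, "big") + chunk).digest()
    return private_key.sign(digest)

def verify_chunk(index: int, chunk: bytes, signature: bytes) -> bool:
    digest = hashlib.sha256(index.to_bytes(8, "big") + chunk).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False

sig = sign_chunk(0, b"keyframe bytes ...")
print(verify_chunk(0, b"keyframe bytes ...", sig))   # True
print(verify_chunk(0, b"tampered bytes ...", sig))   # False
```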
I wonder what the CAPTCHA equivalent for AI bots is? Ask about taboo topics to rule out commercial models, and ask specific reasoning questions that trip up AI, like walking vs. driving to the car wash? Or your own set?
At this point "spotting AI" is IMO an irrelevant skill. It's something to be aware of but a bunch of the time I can't tell even with an extended look on static images, or if I'm on a phone and scrolling then nothing really tweaks automatically - perceptually the flaws blend exactly as you'd expect them to.
So it's all context clues really - i.e. if the video tracking shot is sort of within the constraints of the models, plays to obvious agendas etc. then I might tweak to go looking for artifacts...but in the propaganda game? That's already game over. And we're all vulnerable to the ground shifting beneath us - i.e. how much power would there be if you had a model which could just slightly exceed those "well known" limitations?
IMO the failure to implement strong distributed cryptography much earlier in the digital age is going to punish us hard for this - i.e. we haven't built a societal convention of verifying and authenticating digital communications amongst each other, and technology has finally caught up that it can fool our wetware now. It was needed well before this - e.g. the rise of the telephone scam and VOIP should've been when we figured out how to make sure people were in the habit of comprehending digital signatures and authentication. It isn't though, and now something much more dangerous is out there.
Recently one of my friends got email hijacked and whatever entity it was seemingly used her past sent emails as a training corpus to construct some very convincing pleas for donations involving a dog rescue she's been operating for several years.
It also included personal details only her closest friends and family would know. I assume this is being done at scale now. These are NOT Nigerian prince scams of yesteryear; this is something entirely different.
AI slop detection requires finely developed intuitions that come from decades-long exposure to both journalism/marketing slop and high-quality literature. Because AI was aligned into this hell by freshly graduated, low-level journalism types.
That's why it always falls back to the same tired formalistic clichés, like "Not this, but that", rampant baiting and sensationalism, because that's what would get high marks from your typical low-rent liberal arts annotator.
Man, I have nothing against the liberal arts per se. On the contrary, I think a tragedy of our time is that people have disconnected from things like literature, history and art in the name of over-specialization and an excessively utilitarian approach to education.
But I am very critical of what passes as the modern liberal arts academic establishment. To avoid a very long text, let's say that my view is heavily influenced by Ortega y Gasset.
The deeper problem here isn't that deepfakes are too good - it's that every "proof of humanity" test converges to the same bag of tricks. Shared secrets, liveness checks, biometric challenges. An attacker who studies the test can pass it. We keep building Voight-Kampff machines without asking whether the Voight-Kampff framing is the right one. The question isn't "can you tell this is real" - it's "what would you accept as proof, and can that proof be synthesized?"
The author should have mentioned that this was partly an article to whitewash Netanyahu, but this coming from the BBC (and from the mainstream British media as a whole) that was to be expected.
Perhaps we need tamper-proof, authenticated cameras in all major cities worldwide that publish a livestream 24/7, and you can then stand in front of them to prove your human existence...
This could be something that notaries around the world could offer as a service.
The options I have seen so far were a) using our digital IDs, which is very handy or b) having a bank verify my identity in person with my ID, which is also pretty good.
These options are not available to recent immigrants, people with foreign documents and people without a registered address. I spent a lot of time working around those limitations.
Or in general, a way to digitally sign a tamper-free video recording made with a camera from a reputable manufacturer. Maybe a regular iPhone already has enough integrity checks and security contexts to achieve this.
I'm almost certain that an iPhone camera can do that, and the reason is that Apple controls the full stack. It's necessary but not sufficient, since it's missing the identity maintenance when media leaves the device. Apple would have to place a cryptographically signed digital watermark into a global blockchain so that the analog hole can be closed. All devices that present that media back to a human would need to verify the content's provenance chain back to the initial capture device.
There's nothing missing technology-wise to achieve this, but we, at this point, lack the collective will and the regulatory regime. I do foresee a future where this is the norm and anything you listen to or watch you'll be able to trace back to the device that captured the data.
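To make the shape of such a provenance chain concrete, here's a hedged sketch in Python, assuming the third-party `cryptography` package. The "ledger" is just a Python list, the actor names are made up, and this is not Apple's scheme nor any standard's (e.g. C2PA's) actual format.

```python
# A sketch of a per-media provenance chain: each step records a hash of the
# media, a pointer to the previous record, and a signature over both.
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

def add_record(chain: list, actor: str, key: Ed25519PrivateKey, media: bytes) -> None:
    body = {
        "actor": actor,
        "media_sha256": hashlib.sha256(media).hexdigest(),
        "prev": chain[-1]["record_hash"] if chain else "",
    }
    payload = json.dumps(body, sort_keys=True).encode()
    chain.append({
        **body,
        "record_hash": hashlib.sha256(payload).hexdigest(),
        "signature": key.sign(payload).hex(),
        "public_key": key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw).hex(),
    })

camera_key = Ed25519PrivateKey.generate()   # would live in the capture device
encoder_key = Ed25519PrivateKey.generate()  # a later processing step

chain: list = []
add_record(chain, "capture-device", camera_key, b"raw sensor frames ...")
add_record(chain, "re-encoder", encoder_key, b"h264 re-encode of the same frames ...")

# A player would check every signature and that each record's `prev` matches
# the previous record's hash before showing a "verified capture" badge.
print(json.dumps(chain, indent=2))
```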
Ultimately ID requires either a government ID service, a third party corporate ID service, or some kind of open hybrid - which doesn't exist.
All of those have their issues.
Not to mention that most 2FA still uses SMS, which has its own well-understood security flaws.
More in-person stuff feels like a win to me (and I say this as someone who probably counts as introverted).
Not being able to trust any online interactions anymore? Seems like a new height in what was already a negative.
Or the opposite, where people attempt to get out of trouble by calling real evidence into question, dismissing it as "AI".
Also it was already possible for someone to impersonate your mother via text or similar, and even easier to pull off.
The communication channel is what you trust. So you would call the person using that trusted channel.
It's just like when you get a scam email or popup from "Microsoft" saying your laptop is compromised and you need to call their number ASAP.
Not GP, but there's a lot of damage that can be done with impersonation.
We're in deep shit.
“Auntie, it’s me! N*** k** f**! X is really a man! ** did 9/11!”
“Oh it really is you Johnny!”
We’re all going to have to start communicating this way. Best of luck.
I offer consulting services on the side to help professionals hone these skills. $250 / hour.
This is the downside of being a human being.
https://ars.electronica.art/panic/de/view/reverse-turing-tes...
(I.e. trying to hide the fact that you're human, among a group of AIs)
How was this solved, actually? More training data, or was there more to it?
I truly believe that it is a crime against humanity
Really? The coffee in his cup, filled to the brim, did the most bizarre dance possible. And he handled the cup as if it was empty, without any care.
Tell us more about this axe you appear to need to grind.
But about deepfakes, these exist to re-add 6 fingers. Once you do this, you can claim the video was generated.
https://www.etsy.com/listing/1667241073/realistic-silicone-s...