Deepfakes and AI Scams: How to Protect Yourself from Digital Deception

In an era where seeing and hearing can no longer be fully trusted, the rise of artificial intelligence (AI) presents a growing challenge. By 2026, AI-generated fake videos and voices are predicted to become a primary tool for scammers, transforming technologies once exclusive to Hollywood into accessible tools for anyone.

The Evolution of Deepfakes

Initially used for fun social media filters and character impersonations in films, deepfake technology has now become widely available, even on smartphones. The line between reality and digital manipulation has blurred, making it increasingly risky to believe what you see on screen.

Early signs of this shift appeared on social media with viral videos featuring celebrities speaking in local languages or expressing affection for landmarks. However, behind these seemingly harmless clips lay a darker market for deepfakes, weaponized by fraudsters to steal money.

A deepfake is a synthetic representation of a person's likeness or voice, created using AI. The term itself is a portmanteau of "deep learning" and "fake."

From Celebrities to Everyday Citizens

Soon, Kazakhstani celebrities began appearing in videos promoting fantastical ways to increase income. As the technology became cheaper, creating personalized deepfakes for ordinary citizens became feasible. One notable case involved a resident of Kostanay who was contacted by someone posing as Keanu Reeves.

How AI Scams Will Operate by 2026

Specialized algorithms, known as neural networks, analyze real images, videos, or voice recordings of a specific individual. They meticulously learn facial expressions, blinking patterns, and vocal intonations. The neural network then superimposes one person's face or voice onto another's actions.

By 2026, deepfakes are expected to manifest in three primary forms.

The Growing Threat of Digital Deception

Creating such effects once required weeks of work by professional film studios. Today, a "digital twin" can be generated in as little as 30 minutes using smartphone apps costing just a few dollars. In the age of deepfakes, the phrase "seeing is believing" no longer holds true.

A $25 Million Deception: The Costliest Case

The most extensive and technically advanced deepfake attack to date occurred in Hong Kong in 2024. What might seem like a financial officer's nightmare was, in reality, a sophisticated scam.

The target was the Hong Kong branch of the British company Arup Group, known for its work on iconic projects like the Sydney Opera House and the Pompidou Centre in Paris. A finance employee received an email from what appeared to be the company's finance director in London, instructing them to carry out a confidential transaction. Given the large sum involved, the employee grew suspicious, but the scammers invited them to a Zoom video call for "confirmation."

During the live call, the employee saw the director they recognized, along with several colleagues. Convinced by this virtual meeting, the employee transferred $25 million across five different accounts. It was later revealed that all participants in the call, except for the employee, were real-time deepfakes.

The victim later testified that their "colleagues" in the call looked and spoke like real people. This incident highlights that deepfake victims are no longer just "gullible grandmothers" but can include employees of high-tech international corporations.

Deepfake Scams in Kazakhstan

While the Hong Kong case might seem like a plot from a global corporate thriller, the reality of AI-driven fraud has already reached Kazakhstani cities, targeting ordinary citizens. People are encountering "revived" idols on WhatsApp, receiving video messages from "Ministers of Internal Affairs," and even having "meetings" with Hollywood stars without leaving their homes in Kostanay.

Dimash Kudaibergen's Fake Investment Platform

One of the most extensive AI attacks on the Kazakhstani internet involved a fraudulent advertising campaign for a "major investment project" under the guise of "Kazatomprom." The scammers chose to exploit the national idol, Dimash Kudaibergen, targeting something deeply revered.

The video garnered millions of views, promising Kazakhstani citizens incredible profits from uranium mining. In a particularly insidious move, the advertisement featured not only the singer but also his parents.

To create the fake video, the criminals didn't need to film anything new. They repurposed existing content. The first part of the video used a New Year's greeting clip of Dimash from his Instagram account. For the segment where Dimash's parents "discussed" the project, the scammers copied material from an Instagram post. The original video was a festive greeting from the Kudaibergen family for Nauryz.

Using neural networks, the fraudsters added new audio and synchronized the lip movements (lip-syncing), transforming warm wishes into calls to invest. Lip-syncing is a technique that aligns a person's lip movements in a video with pre-recorded or generated speech. Neural networks make it possible to have a person "say" anything, even words they never spoke in the original recording, which makes fake videos highly convincing: the speech and facial expressions appear natural and synchronized.

This case serves as a textbook example of how personal content can be stolen and reprocessed by AI for criminal purposes.

"Keanu Reeves Wanted to Help": A Kazakhstani Resident Targeted by a "Hollywood Star"

While Dimash was used to sell the idea of "getting rich," another scam preyed on the dream of celebrity connection. In 2024, a 62-year-old resident of Kostanay fell victim to a classic romance scam. It began when an account closely resembling "Keanu Reeves' profile" liked her social media post.

The "actor" engaged in polite correspondence, even sending a photo of his "passport" and video messages (potentially created with deepfake technology or editing) to convince the woman of his authenticity. Eventually, "Keanu" claimed to be in serious trouble: his accounts were blocked due to legal proceedings, and he needed money for medical expenses. The woman, genuinely wanting to help, was prepared to transfer a large sum.

Fortunately, vigilant bank employees noticed that the account the woman intended to transfer money to belonged to an unfamiliar gaming company and alerted the police. This situation is not unique but part of a global trend. A similar case in Europe involving "Brad Pitt" ended far more tragically: a 53-year-old French woman not only transferred €830,000 to scammers but also destroyed her real family for an illusion created by algorithms.

Digital Clones: How Scammers "Stole" the Tengrinews.kz Brand

Our publication has also been targeted by cybercrime schemes. Scammers understand that people are wary of unfamiliar links but implicitly trust logos seen in their daily feeds. Consequently, they began creating advertising accounts visually similar to the official Tengrinews.kz pages, using brand colors, fonts, logos, and even photos and names of our journalists.

These accounts then ran targeted ads about "prize giveaways," "social payments," or "VAT refunds." To enhance credibility, criminals used deepfake videos featuring AI replicas of popular news anchors announcing the "official" start of payments.

Clicking on these links led users to clone websites mimicking official bank or government pages. The predictable outcome: users were prompted to enter their individual identification number (IIN), card number, and crucially, the CVV code. From that moment, their accounts were compromised.

It's vital to understand that exploiting the brand of Kazakhstan's largest news portal is an attempt to bypass the audience's psychological defenses. When a news post appears to be from Tengrinews, many people's critical thinking shuts down.

How to Avoid Falling for Fake Publications

Our editorial office was compelled to issue a special warning to readers: "Scams on Social Media: Tengrinews.kz Warns About Fake Posts."

Remember: in the digital world, even a familiar "header" on a website cannot be trusted. Always verify two parameters:

Cybercrime Statistics in Kazakhstan

According to data from the Ministry of Internal Affairs and the Committee on Legal Statistics of the Prosecutor General's Office of the Republic of Kazakhstan, internet fraud remains one of the primary threats to citizens' safety. In 2025, the total damage from cybercrimes in Kazakhstan reached 16.4 billion tenge, an absolute record in recent years. The Unified Register of Pre-trial Investigations recorded approximately 20,000 criminal cases related to internet fraud (Article 190, Part 2, Item 4 of the Criminal Code of the Republic of Kazakhstan), accounting for nearly half (48%) of all fraud cases in the country. Almaty (about 40% of all cases), Astana, and Shymkent lead in the number of cybercrimes.

Inside Look: How Digital Illusions Are Created and Why We Believe Them

Behind stories about "Keanu Reeves" or multi-million dollar thefts in Hong Kong lie algorithms amplified by an understanding of human psychology. To grasp how deeply neural networks have infiltrated our daily lives and whether we can distinguish generated content from reality, we consulted professionals who know the field intimately.

Our experts, ranging from AI artists to computer forensic specialists, explain how current deepfakes are made, why scammers have become adept psychologists, and what to do when "your own eyes deceive you."

"Soon, Even Experts Won't Distinguish Neural Network Work from Reality"

Riz Yesentayev, a neuro-artist specializing in creating images and videos with neural networks, notes that current deepfake creation tools are widely used in AI production. He outlines several methods for using someone else's likeness:

"As for fraudulent activities, the opportunities for criminals are vast."

We asked the neuro-artist Yesentayev for clarification: "Can experts currently identify deepfakes with 100% certainty?"

"It depends on the technology used. Yes, there are technologies where even a specialist would find it difficult to spot the difference," he replied. "And with the rapid development of AI technologies, it will soon become impossible for experts to distinguish neural network work from actual footage."

"How can an ordinary person, not a specialist, differentiate a fake video?"

"You need to look closely at artifacts around the face: unclear edges, different textures and clarity of the face and body remnants; problems with animation, unnatural movements, and most importantly – lip-syncing, i.e., the movement of the lips."

"Often, neural networks can be detected by the lips, as these are very active movements during speech and they are not always perfectly synchronized with the audio."

"This technology has two sides," the expert emphasizes. "On one hand, huge risks: fraud, fake news, political manipulation. It's already possible to create a video where 90% of people will believe whatever the president or a popular blogger says."

"On the other hand, it's an amazing creative tool. We can now create in days what used to take months of filming and millions in expenses. You can 'resurrect' an actor, create a character that never existed, or shoot an advertisement with a brand ambassador's face, even if they are physically on another continent."

"Therefore, I believe that banning or severely restricting the technology is futile; it will continue to develop regardless. It's more important to learn to live in a world where video is no longer irrefutable proof."

"Michael Jackson" Addresses Tengrinews Readers

The best way to understand how far the technology has come is to see it in action. In the two videos below, the "King of Pop, Michael Jackson" delivers a message to Tengrinews.kz readers relevant to this article's content:

This clip is a 100% deepfake created by neuro-artist Riz Yesentayev specifically for our editorial team.

How Was It Made?

"Creating such a clip involves four stages: image generation, animation, audio generation, and lip-syncing. It took approximately 30-40 minutes in total," says the expert.

The Mimicry Factory: Half the Internet is Bots

To understand how scammers automate deception and how your chat partner might be lines of code, we consulted Elkhan Oralbayev, who calls himself a bot developer.

Elkhan creates bots that "live" on social networks and messengers for customer interaction and automating routine processes. He is convinced that we are already living in a "dead internet," where generated reactions are indistinguishable from real ones.

"Elkhan, is it true that any post on social media can be flooded with bots, and we sometimes read responses from artificial intelligence instead of people?"

"Let me be clear. According to various data, today, 50% or even more of comments on the internet come from bots. Modern AI bots can search for information based on given triggers, analyze context, and leave comments according to their programmed prompts. This happens automatically, 24/7. Bots don't get tired, don't sleep, and can imitate any emotion – from surprise to angry debate."

"What are the most dangerous scenarios for ordinary users involving AI today?"

"The main thing is the evolution of classic schemes. Previously, scammers would call with a distorted voice saying 'my brother had an accident,' but now they use the actual voice of your loved one. AI can even create a short video with the image of the desired person on Telegram. When combined with spoofing technology (number masking), such an attack can be fatal when your phone screen shows a real contact from your phone book. But there's a simple defense: if you receive a call from a relative via mobile, hang up immediately and call them back yourself. Only by calling back will you reliably connect to the real person, not the scammers' server."

"We are accustomed to thinking of 'hacking' as a complex technical process. Has AI made hackers' lives easier?"

"AI creates malicious software much faster than humans, but 'breaking' a person is still easier than breaking a system. Phishing through compromised friends' accounts is now flourishing. You receive a private message with a link: 'vote for the photo,' 'watch my video.' The excuses can be anything. Expert's advice: if the link doesn't lead to popular services (YouTube, Instagram, TikTok) but requires authorization or voting – don't click. Contact the sender through another messenger. Treat any requests for money similarly."

"They say scammers are starting to use neural networks to create informational messages and websites. How does AI phishing differ from the old kind? Now that grammatical errors are gone, what indicates a fake?"

"Yes, visually, phishing tools are now indistinguishable from prototypes. But the signs for detection, surprisingly, remain the same: similar domain names with one or two character errors, inconsistencies in information (irrelevant address, place of work), overly general wording. The logic of the attack also remains the same – a short confirmation window. And the psychology – 'blocking,' 'bonus credited,' 'time limited.' Also, the classic – requests via online forms, where data should be known or is excessive for this request: IIN, bank card details with CVV code, etc."

"Tell us about a case you handled where AI was used in the most unexpected way."

"I was more of a consultant. The digital persona of a girl on a dating site. Long correspondence. Photos. Videos. The result – a request for financial assistance with a predictable outcome."

"When there's a crime scene, even if it's digital, what happens? What does a computer forensic specialist's 'call out' look like?"

"Everything depends on the nature and circumstances. In any case, like classic forensics – examining the crime 'scene' (phone, computer, server), collecting evidence (logs, files, memory dumps). Sometimes, and often, this is done remotely. Rarely – visiting the location of the equipment or even a scammers' call center to stop attempts to erase digital traces. But sometimes there are peculiarities: for example, studying and collecting data from a fitness bracelet to determine the exact time and circumstances of death based on the dynamics of vital signs."

"There Are No Victims Among Fools"

We've seen how deepfakes are made and how bots flood our space. But the main question remains: **knowing these technologies exist, why do we still fall for them?**

To understand the "anatomy" of cybercrime and what a modern "digital thief" looks like, we spoke with Anatoly Remnyov, a specialist in computer forensics.

Forensics here means both a set of measures and the specialists who carry them out: investigating financial fraud, corporate fraud, and cybercrime, protecting businesses, and preparing evidence for court, including financial investigations, digital data analysis (computer forensics), counterparty checks, and corruption detection.

His job is to find digital traces where criminals try to make themselves invisible. And his experience forces a re-evaluation of personal security.

Our first question: compared to 2-3 years ago, how has the average cyber fraudster in Kazakhstan changed? What new tools have appeared?

The expert's answer: "The average fraudster is too general a definition. We must understand that behind every fraud scheme stand not only the final executors but also the organizers, and it is they who are far more experienced in the psychology of the average internet user and in methods of deception. The essence of fraud has not changed; its format has. Our society has become digital, and all the negative and destructive phenomena have, however surprising it sounds, moved smoothly and harmoniously from our daily lives into the digital ecosystem."

"Regarding tools – these are still the same four main social engineering methods:

Depending on the nature and scope of the fraud, a specific classic method is chosen, and a particular scheme is adapted to it. The essence is the same – to gain the user's trust and force them to voluntarily part with their money or property. The problem is that fraudsters are constantly honing their skills in specialized areas such as profiling."

Profiling (from the English "profile") is a method of analyzing a person's behavior, speech, emotions, and outward signs in order to determine their psychological characteristics and predict their likely actions. Profiling was initially used in criminalistics and by security services – for example, to identify suspicious passengers or build a criminal's portrait. Over time it spread to other fields: business, negotiations, and everyday communication. In the context of cybercrime, profiling helps select potentially vulnerable individuals and develop effective tactics for communicating with them – for example, by exploiting fear, urgency, or trust. This is why many deception schemes seem so convincing: they rest not only on technology but also on an accurate understanding of human psychology.

"Name one AI-used scheme currently widespread in Kazakhstan that many people don't even suspect," we asked.

"Unfortunately, I cannot give an exact assessment of the prevalence, but I believe it is the scheme of advertising high-yield investments or non-existent stocks by copying the image and voice of popular politicians, bloggers, and public figures. This looks very natural within the normal information noise on social networks, involving artists, actors, and opinion leaders in various advertisements."

"Deepfake is no longer just about swapping faces in a video. In your experience, what are the most cunning uses of synthetic voice and video?"

"You are right, deepfake has long gone beyond 'pasting a face onto a video.' The most advanced schemes are combinations of video and interactive reaction scenarios (behavioral context), where AI imitates not only the appearance but also the communication style. This is often implemented through video conferences. In my personal experience, I had to analyze a recording of a video conference from the presentation of one of the financial pyramids, where the company's 'founder' interacted with participants and answered their questions. I also encountered popular 'short videos' on the theme of 'Mom, I had an accident.'"

"We are seeing the emergence of 'synthetic personalities.' Will banks completely abandon remote identity verification because AI is indistinguishable from humans? What will happen then?"

"I don't think so; the remote identity verification procedure will remain. However, it will undergo significant changes. The most complex issue here is collecting biometric information (fingerprints, retinal scans) as a benchmark. Perhaps, with the development of technologies and the development of models for identity verification based on a set of technical factors, some form of two-factor verification will be used."

"Are there programs in Kazakhstan that can 100% distinguish a person's voice synthesized by AI during a phone call?"

"I am not aware of such software. And technically, such software is impossible for a number of reasons. The quality of speech synthesis is already at the 'human-human' level: modern AI models imitate breathing, pauses, mispronunciations, emotions, intonations, adapt to noise and 'telephone' sound quality. Many artifacts (AI signs) are simply lost during digital signal processing over communication channels, leaving nothing for the detector to 'catch.' Yes, there are systems (mostly in banks) that analyze the spectrum, speech rhythm, and check behavioral markers, but they are often based on comparison with reference recordings, and the result of their work is probabilistic – the assessment is given in percentages."

"The question 'Is this a real voice?' no longer has a precise and reliable answer. The right question is different: 'If the voice cannot be trusted, how can identity be verified?'"

"Scammers have started using neural networks to create phishing emails and websites. How does AI phishing differ from the old kind? Now that grammatical errors are gone, what indicates a fake?"

"Yes, visually, phishing tools are now indistinguishable from prototypes. But the signs for detection, surprisingly, remain the same: similar domain names with one or two character errors, inconsistencies in information (irrelevant address, place of work), overly general wording. The logic of the attack also remains the same – a short confirmation window. And the psychology – 'blocking,' 'bonus credited,' 'time limited.' Also, the classic – requests via online forms, where data should be known or is excessive for this request: IIN, bank card details with CVV code, etc."

"Tell us about a case you handled where AI was used in the most unexpected way."

"I was more of a consultant. The digital persona of a girl on a dating site. Long correspondence. Photos. Videos. The result – a request for financial assistance with a predictable outcome."

"When there's a crime scene, even if it's digital, what happens? What does a computer forensic specialist's 'call out' look like?"

"Everything depends on the nature and circumstances. In any case, like classic forensics – examining the crime 'scene' (phone, computer, server), collecting evidence (logs, files, memory dumps). Sometimes, and often, this is done remotely. Rarely – visiting the location of the equipment or even a scammers' call center to stop attempts to erase digital traces. But sometimes there are peculiarities: for example, studying and collecting data from a fitness bracelet to determine the exact time and circumstances of death based on the dynamics of vital signs."

"There Are No Victims Among Fools"

Most people are confident: "I won't be fooled, I'm not stupid." In reality, who becomes a victim of AI-driven fraud?

"Let's put it this way: there are no victims among fools. Even competent people can fall victim at a moment of vulnerability, because fraudsters first collect information about the potential victim, and the more complete their profile, the higher the guarantee of success. There is no typical victim profile, or rather, no profile at all. There are recurring patterns based on behavioral psychology and situational manipulation."

"A typical AI victim," as the criminalist jokes, "is not carelessness, not magic, not experience, not intelligence, but habituation to procedures."

Deepfakes for Entertainment

In conclusion, let's touch upon another danger of deepfakes, but one directed at their creators. In today's world, even something fake created not for criminal purposes but for hype on social media can lead to legal trouble.

The most resonant case in Central Asia was last year's incident in Bishkek involving a blogger known by the nickname Akuma. She published AI-generated images on social media depicting her with the President of Kyrgyzstan, Sadyr Japarov. Although the images were clearly AI-generated, the court found them insulting to the head of state's honor and dignity. The blogger was found guilty of disseminating false information and fined.

Social media users reacted to the incident with outrage: "How many photoshopped images of Trump, Macron, Merkel, and others have you seen? Those are developed countries, yet no one there is prosecuted or fined for such things," they wrote.

To understand if such situations pose a threat to Kazakhstani citizens, we consulted Sergey Utkin, a well-known lawyer.

"According to the general rule in Kazakhstan (Article 145 of the Civil Code of the Republic of Kazakhstan): the use of any person's image is possible only with the consent of that person or their heirs," he explained.

"Are there exceptions where consent is not required?"

"Exceptions exist for journalists, as stipulated in the laws 'On Mass Media' and 'On Online Platforms.' Consent is not required if:

When using others' photos, remember that they may have an author and copyright holder, so the issue of copyright protection is a separate topic.

Now, regarding deepfakes. Essentially, this is the dissemination of false information, which is prohibited by law and carries administrative or even criminal liability. However, if the deepfake is clearly labeled as such (for example, an AI-generated collage or a satirical image) and users fully understand that what they are seeing is not a real photo or video, then liability for disseminating false information should not arise.

Therefore, in such cases, including those involving AI, one should be guided by the general rules and exceptions mentioned above and refrain from disseminating false information.

Guideline: Don't Become a Victim of Cyber Scammers

The main conclusion from our conversations with experts is alarming: eyes and ears are no longer reliable witnesses. If you see your mother's face on your smartphone screen or hear your boss's voice – it might just be a collection of pixels and frequencies that can be created in just 30 minutes. To protect yourself, you need to shift from the role of "victim" to "active party." Here's a step-by-step algorithm:

1. The "Counterattack" Rule: Take the Initiative

If you receive a call (via messenger or phone) and are asked for money, threatened with an accident, or offered "secret investments": hang up and call the person back yourself on the number saved in your contacts. Only an outgoing call will reliably connect you to the real person rather than the scammers' server.

2. Block "Digital Time Pressure"

Scammers always create a sense of urgency ("Need it now!", "The account will be blocked in five minutes").

Take a 10-minute break. Hang up the phone, drink some water. During this time, your emotional state will normalize, and you can assess the situation calmly.

Remember: no official body resolves critical issues over the phone in five minutes.

3. Hygiene of Links and "Digital Footprints"

4. Paranoid Filter for Media and Brands

Saw a news item about payments on Tengrinews.kz? Check the address in your browser. If it says tengri-news.com or tengrinews.kz.web.ru – it's phishing. The real address is only one: tengrinews.kz.
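
For readers who want to see what this check amounts to in code, here is a minimal illustrative sketch (not an official Tengrinews.kz tool, and the example links are hypothetical): a link counts as official only if its hostname is exactly the real domain or a direct subdomain of it.

```python
# Illustrative sketch: trust a link only if its hostname is exactly the
# official domain (or a direct subdomain such as "www.tengrinews.kz").
from urllib.parse import urlparse

OFFICIAL_DOMAIN = "tengrinews.kz"

def looks_official(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    return host == OFFICIAL_DOMAIN or host.endswith("." + OFFICIAL_DOMAIN)

# The fake addresses named above fail the check; the real one passes.
print(looks_official("https://tengrinews.kz/"))         # True
print(looks_official("https://tengri-news.com/"))       # False
print(looks_official("https://tengrinews.kz.web.ru/"))  # False
```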

"Don't Trust Your Eyes." Conclusion

In the 19th century, Kozma Prutkov coined a famous aphorism urging critical thinking: "If you read the inscription 'buffalo' on an elephant's cage – don't trust your eyes."

In 2026, this saying will no longer work. An elephant in a cage, even one labeled "elephant," is no guarantee that it is truly an elephant. Fittingly, Kozma Prutkov himself turned out to be a fictional character: a group of literary figures hid behind the pseudonym.

Videos and photos are no longer proof of reality – they are now a "playground" and an art form. This is disorienting: it becomes unclear what can be relied upon, what can be accepted as fact. Will we have to undergo multi-stage verification for any action in the future? Will an "Information Ministry" emerge? Or will we, like in past centuries, rely only on live witnesses?

This is not the limit: the next stage might be holographic scams, where not only sight but also touch will be needed for authenticity. At least, until tactile deepfakes appear.

We hope this is not the immediate future. For now, simple rules remain:
