Photos and videos generated with AI (artificial intelligence) have been the talk of the internet for some time now. Whether to create an avatar or a video for a special occasion, these creations are everywhere in everyday online life, but they are often confused with real images, which can fuel the spread of fake news. One of the most affected audiences is the over-50 age group, especially the elderly, and one of the places with the most misleading AI-generated images is Facebook. Check out some data from published studies on the subject.
Research data

Content generated by artificial intelligence has become increasingly popular, especially on social media, with images that fall into the so-called uncanny valley. This concept refers to images that appear real but contain artificial elements that cause discomfort in those who see them. Even though they closely resemble real humans, images in the uncanny valley retain an artificial quality despite looking almost natural. This perception, apparently, is stronger among younger people.
Older people have more difficulty identifying these strange, unnatural elements in media generated by artificial intelligence, making them more likely to be deceived. Art produced by AI is not obvious to everyone, and research shows that people over 50 are falling for these visual tricks on social media en masse.
Platforms such as Facebook have become increasingly popular among older people and seniors looking for entertainment and companionship, as younger users have migrated to more current apps like Instagram and TikTok. Apparently, Facebook's algorithm has been intentionally pushing AI-generated images into users' feeds with the aim of selling products and attracting followers, according to an article by researchers at Stanford University and Georgetown University.
Scientists still do not have definitive answers about the psychological impacts of art generated by artificial intelligence, since image generators only became publicly available recently, around two years ago. But understanding why older friends and relatives may feel confused can provide important clues to help prevent them from becoming victims of fraud or misinformation.

Understanding how this scenario plays out is important because technology companies like Google tend to ignore older users during internal testing, according to Bjorn Herrmann, a cognitive neuroscientist at the University of Toronto who studies the impact of aging on communication.
Even though the aging process, particularly cognitive decline, may seem like the most obvious explanation for this mismatch with current technology, early research suggests that a lack of experience and familiarity with AI could help explain the gap in understanding between younger and older audiences. A survey of almost 1,300 North American adults aged 50 and over shows that only 17% of participants said they had read or heard about AI.
So far, the few experiments analyzing older people's perception of AI seem to echo what is happening on Facebook. In a study recently published in the journal Scientific Reports, scientists showed 201 participants a mix of AI-generated and human-generated images and assessed their responses based on factors such as age, gender, and attitudes toward technology. The team found that older participants were more likely to believe that the AI-generated images were made by humans.
While research into people's perceptions of AI-generated content is limited, researchers have found similar results with AI-produced audio. Last year, Herrmann reported that older individuals had a decreased ability to discern between human-generated speech and AI-generated speech compared to younger individuals.
In general, Simone Grassini, a psychologist at the University of Bergen in Norway, believes that any type of AI-generated media can more easily mislead older viewers as part of a broader overall effect. Both Herrmann and Grassini suggest that older generations may not have learned the hallmarks of AI-generated content and encounter it less frequently in their daily lives, making them more vulnerable when that content appears on their screens.
Reduced cognitive capacity and hearing loss (in the case of AI-generated audio) may play a role, but Grassini noted that the effect also appeared in people in their forties and fifties. Younger people have grown up in the age of online misinformation and are accustomed to doctored photos and videos, Grassini added.
How to protect yourself from fake AI-generated images

The rapid evolution of artificial intelligence certainly brings many benefits to society, but as seen in this article, we must keep an eye on the consequences this technology can have. Although its potential is vast, covering areas such as health and education, the possible downsides cannot be ignored. Haywood Talcove, CEO of a cybersecurity organization, warns seniors about the growing use of AI in romance fraud schemes, fake ransom scams, and tax fraud against the government.
Talcove stated that a large number of people are working tirelessly to develop this technology for good. On the other hand, there is also a group equally dedicated to using its skills for harm, employing AI to refine their schemes and exploit older people and the elderly. With that in mind, here are some points to help you protect yourself and those who need this guidance:
- Romance scams: With advances in artificial intelligence, scammers can now create images that look extremely real. They can also generate a voice designed to lure a specific victim. These realistic images and voices are used to persuade the victim to join video meetings. The tip here is to always confirm the authenticity of a photo or video by comparing it with other media from the same person, or even by making live calls;
- Fake ransom: On many social media platforms, people post short videos in which their voices are recorded. A fraudster can use one of these recordings to create an extremely realistic copy of the person's voice. This way, the scammer can imitate the voice of a child, grandchild, or other family member during a call. During the conversation, ask specific questions and try to confirm the identity of the person on the line. The voice alone is not proof that the person is who they claim to be;
- Think before you click: The old trick of getting people to click on links can also be combined with artificial intelligence. It is important to be cautious when opening links, whether via email or text message. If the sender is not recognized, avoid clicking the link. Remember that our smartphones are also computers: a single click on a suspicious link can result in malware being downloaded onto your device;
- Trust your intuition: Artificial intelligence can create messages that appear to come from people you know, using publicly available information. This increases the risk of falling for scams. Be wary of unsolicited emails and messages that ask for personal information, even if they appear legitimate. Trust your intuition when you notice red flags, such as spelling mistakes or strange, repetitive formatting. If in doubt, contact the company or person directly by phone, not email, to confirm authenticity;
- Keep your software up to date: Artificial intelligence can also identify vulnerable devices by checking software versions or security flaws. To keep your devices safe, always keep your device (smartphone, notebook, tablet, etc.), software and applications up to date. This helps protect against potential malware exploits.
How to protect the elderly

Despite the challenges of identifying fake content as it proliferates online, seniors often have a clearer view of the bigger picture. In fact, they may be more likely to recognize the dangers of AI-generated content than younger generations.
A MITRE-Harris Poll survey involving more than 2,000 people indicated that a larger share of Baby Boomers (born between 1946 and 1964) and Generation X (1965 to 1980) is concerned about the consequences of deepfakes (edits to photos and videos that are quite difficult to detect) compared with participants from Generation Z (1997 to 2012) and Generation Y (also known as Millennials, born between 1981 and 1996).
Older age groups had a higher proportion of participants who advocated for regulation of AI technology and greater investment by the technology industry to protect the public. The research also revealed that older adults can distinguish false headlines and stories at least as accurately as younger adults, and in some cases more accurately.
Older adults also tend to consume more news than their younger peers and may have accumulated extensive knowledge about specific subjects throughout their lives, which makes it more difficult to deceive them.
Scammers have been using increasingly sophisticated generative AI tools to target older adults. They can use deepfake audio and images taken from social networks to simulate a family member calling to ask for money, or even fake a relative's appearance in a video call.
Fake videos, audio, and images can also influence older voters ahead of elections. This can be even more harmful, since people aged fifty and over tend to make up the majority of voters in countries like Brazil.
To help the seniors in their lives, Hickerson highlighted the importance of spreading information about generative AI and the risks it can pose online. One way to start educating them is by pointing out characteristics of these images that are clearly false, such as overly smooth textures, strange-looking teeth, or patterns that repeat unnaturally across photo backgrounds.
She adds that we can also clarify what we know and don't know about social media algorithms and how they affect the elderly population. Part of this is also reminding them that misinformation can come even from friends and family.

With deepfakes and other AI creations advancing every day, even the most experienced technology experts may find it difficult to identify them. Even if you consider yourself quite knowledgeable, these models can be disconcerting. The website This Person Does Not Exist, for example, offers incredibly convincing photos of AI-created fake faces, often with no obvious signs of their computational origins.
Although researchers and technology companies have developed algorithms to automatically detect fake media, they are not infallible, and constantly evolving generative AI models tend to outpace them. One of the most famous image-generation AIs, Midjourney, struggled for a long time to create realistic hands before finally succeeding with a new version released a few months ago. To address the growing wave of imperceptible fake content and its social consequences, Hickerson emphasized the importance of regulation and corporate responsibility.
And you, what do you think of this wave of AI-generated misinformation? Have you ever been in a similar situation? Tell us in the comments!
With information from: The Daily Beast, NORC, NCBI, and American Legion.
Reviewed by Glaucon Vital on March 26, 2024.