https://www.avira.com/en/blog/how-to-spo...w-19101225 How to spot a bot on social media
13 October 2020 by Diana Plutis
Known in the early days of the internet as “software robots”, bots are software applications that perform automated tasks. The internet as we know it today has been shaped by bots.
Good bots vs. bad bots
Web crawlers, or spiders, browse web pages and index content so that search engines can provide relevant results for our searches. Chatbots are programmed to interact with us in real time to provide information. Game bots act as players in multiplayer online games to make them more entertaining. Some bots even get creative and write poetry or compose music. But there are also bad bots, and, unfortunately, their number is growing.
In 2019, bad bots accounted for 24.1% of all internet traffic, according to Imperva’s Bad Bot Report. Bad bots spread spam and malware, steal login credentials and sensitive data, conduct denial-of-service attacks, and disseminate disinformation on social media, among other things. The bad social bots that infiltrate social networks have raised particular concern in recent years as they have become involved in social and political discussions.
Social media bots: threats and challenges
Bad social media bots are used to create fake profiles on social media and generate likes and followers. But they become truly dangerous when they mimic the behavior of real users and actively engage in public discourse. The prime example of the danger posed by such bots is the 2016 U.S. presidential election. In the aftermath of the Cambridge Analytica scandal, more social media users became aware of how politically partisan groups can use social media to spread disinformation and fake news. Bots played a major role in spreading that disinformation. Twitter identified 2,752 bot accounts linked to the Russian Internet Research Agency, one of the main players in the 2016 disinformation campaign, plus roughly 36,000 Russian bots, according to a report by Talos Intelligence.
Statistics provided by TheNextWeb reveal that Facebook deactivated 694 million fake accounts in 2017, and the number has grown since then, with more than 1 billion fake accounts now being shut down each year. Behind fake accounts are either people with malicious intent or programmed social bots. It is estimated that 5% of all monthly active accounts are fake.
Concern about social media is growing, especially in the run-up to the 2020 U.S. elections. A survey conducted by Avira and Opinion Matters found that only 24% of Americans believe the 2020 elections will be “free and fair”, with 50% naming misinformation on social media as the main source of interference in the upcoming elections.
While tech platforms are developing machine learning models to identify fake accounts and enforcing stricter rules on posting content related to social and political issues, there are also steps you can take to protect yourself from bad social bots.
Tips for spotting a bot on social media
Profile information
The profile picture is one of the first things that people notice on social media. Not having a profile picture – or having a generic one depicting a landscape or a cute puppy – is seen as suspicious. The same goes for account names that use numbers instead of names and for the absence of location data. However, these are not clear signs that an account is fake. It might belong to a person who is concerned about their privacy and avoids using personal photos, or who prefers the shield of anonymity on social media. Twitter points out that it’s important to look at “the holistic behavior of an account, not just whether it’s automated or not.” To figure out whether you are dealing with a bad bot, you need to go beyond the first impression and look in detail at the account’s activity, network, and content.
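None of these profile signals is conclusive on its own, but it can help to check a few of them together. Here is a minimal sketch of that idea in Python; the field names, thresholds, and the example account are illustrative assumptions, not any platform’s real API.

```python
import re

def profile_red_flags(profile: dict) -> list[str]:
    """Collect simple warning signs from a (hypothetical) profile record."""
    flags = []
    if not profile.get("has_photo", False):
        flags.append("no profile picture")
    # Handles like "maria84712093": a long run of digits tacked onto a name
    if re.search(r"\d{5,}$", profile.get("handle", "")):
        flags.append("handle ends in a long string of digits")
    if not profile.get("location"):
        flags.append("no location given")
    return flags

# Made-up example account that trips two of the three checks
print(profile_red_flags({"handle": "maria84712093", "has_photo": False, "location": "Berlin"}))
# -> ['no profile picture', 'handle ends in a long string of digits']
```

The more flags an account collects, the more closely you should look at its activity, network, and content before trusting it.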
Activity
Bots usually post very frequently throughout the day. Their daily activity is more intense than that of an average social media user. However, oftentimes, the posts are not original: bots are created to amplify the messages that their creators want to disseminate, so they will like and share a lot of posts without expressing an opinion on the content. Their job is to spread the message as quickly as possible and get it trending.
On Twitter, bots often use many hashtags in their posts to get the hashtags trending. Using a lot of hashtags, sometimes not related to the content of the post, is a sign that the post might belong to a bot. Watch out for hashtags used in a spammy manner and check whether the hashtags are used by other accounts that you know and trust.
The pattern of activity is also an important indicator of an account’s authenticity. If an account is active only on specific days or consistently posts at the same times, it’s probably automated. Not all automated accounts are bad, of course. But if the activity revolves around divisive topics and the account is active only during specific periods – for example, during election season – its posts should be treated with extra scrutiny.
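If you can jot down or export the timestamps of an account’s recent posts, even a rough calculation makes these patterns visible. The sketch below is a toy illustration in Python; the timestamps are made up, and the point is only to show how posting volume and clock-like regularity can be measured.

```python
from collections import Counter
from datetime import datetime

def activity_summary(timestamps: list[str]) -> dict:
    """Posts per day, and how concentrated posting is in one hour of the day."""
    times = sorted(datetime.fromisoformat(t) for t in timestamps)
    days = (times[-1] - times[0]).days or 1       # avoid dividing by zero
    posts_per_day = len(times) / days
    by_hour = Counter(t.hour for t in times)      # how many posts fall in each hour
    busiest_share = max(by_hour.values()) / len(times)
    return {"posts_per_day": round(posts_per_day, 1),
            "busiest_hour_share": round(busiest_share, 2)}

# Made-up timestamps: several posts per day, always around 14:00
print(activity_summary([
    "2020-10-01T14:00:00", "2020-10-01T14:05:00",
    "2020-10-02T14:01:00", "2020-10-03T14:02:00",
]))
# -> {'posts_per_day': 2.0, 'busiest_hour_share': 1.0}
```

A very high posting rate combined with posts bunched into the same hour every day is the kind of regularity a human rarely shows.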
Network
Bots created with the purpose of artificially amplifying a post or tweet are oftentimes part of a network – a so-called botnet. They are programmed to act in a similar way and follow the same topics and hashtags. You can check the number of followers/friends and whether the accounts in the network seem real. For example, a generic Facebook profile created not long ago, which is following thousands of people but has only a few hundred followers, is suspicious to say the least.
You should also be careful with friend requests on Facebook. Bots try to befriend people to grow their network. If you don’t know the person, you have no friends in common, and the friend request does not include any personalized message, it’s best to decline the friend request.
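The follower/following imbalance and the account’s age mentioned above can be turned into a rough rule of thumb. The thresholds in the sketch below are illustrative guesses rather than established cut-offs, and the numbers come from a made-up example.

```python
from datetime import date

def network_looks_suspicious(followers: int, following: int,
                             created: date, today: date) -> bool:
    """Flag a young account following far more people than follow it back."""
    age_days = (today - created).days
    ratio = following / max(followers, 1)   # guard against zero followers
    return age_days < 180 and following > 1000 and ratio > 10

# Made-up example: a profile a few months old, following thousands of people,
# with only a couple hundred followers
print(network_looks_suspicious(followers=200, following=4000,
                               created=date(2020, 6, 1), today=date(2020, 10, 13)))
# -> True
```

A real account can of course look like this too, which is why the check is a prompt to look closer, not a verdict.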
Content quality
Several signs in the content itself can indicate that an account is a bad bot. The content might be inflammatory, aggressive, or misleading. Data and statistics might be given without sources, or with sources that are just other posts from suspicious accounts. If there are links to online news sources, you should also check the authenticity of those sources. Last but not least, if the writing style doesn’t sound natural, the content of the post may have been automatically generated.
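One of these checks, whether links point to sources you already know and trust, is easy to do for yourself. The sketch below pulls URLs out of a post’s text and compares their domains against a small list of outlets you maintain; both the regular expression and the example list are simplifying assumptions for illustration.

```python
import re
from urllib.parse import urlparse

# A small, personal list of outlets you already trust (example entries only)
TRUSTED_DOMAINS = {"reuters.com", "apnews.com", "bbc.com"}

def unknown_sources(post_text: str) -> list[str]:
    """Return link domains in a post that are not on your trusted list."""
    urls = re.findall(r"https?://\S+", post_text)
    domains = {urlparse(u).netloc.lower().removeprefix("www.") for u in urls}
    return sorted(d for d in domains if d not in TRUSTED_DOMAINS)

post = "SHOCKING numbers!!! https://www.totally-real-news.example/article http://bbc.com/news"
print(unknown_sources(post))  # -> ['totally-real-news.example']
```

An unfamiliar domain is not proof of anything, but it is a cue to verify the claim elsewhere before sharing it.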
Identifying bad bots can be tricky, and many aspects need to be considered. Not all automated accounts are bad, and it’s up to us to figure out which bots are harmful and develop a critical approach to social media content. You might also be interested in reading our tips for identifying misinformation on social media.