LinkedIn is preparing to roll out new security features to protect users from scammers posing as fake corporate executives and job recruiters on the platform.
LinkedIn is particularly useful to fraudsters and even spies because profiles can contain sensitive details, including a person's current job, employment history, and a way to contact them directly.
Over the years, hackers and scammers have also been spotted exploiting LinkedIn to send out fake job offers as a way to trick victims into installing malware or dupe them into handing over their personal data. Earlier this month, security journalist Brian Krebs reported that a flood of fake LinkedIn profiles of people claiming to be consultants and chief information security officers had popped up, likely for malicious purposes.
In response, LinkedIn plans to introduce changes over the next several weeks that promise to make it easier for users to detect suspected scam activity. One is a new "About this profile" feature, which will show you when a LinkedIn user created their profile and whether it's been verified with a phone number or a work email.
"We hope that viewing this information will help you make informed decisions, such as when you are deciding whether to accept a connection request or reply to a message," LinkedIn VP Oscar Rodriguez wrote in a blog post.
The "About this profile" feature arrives this week on each user's profile page and can be accessed via the three-dot menu. The company also plans to add it to LinkedIn invitations and messages. For work email verification, LinkedIn is starting with a limited number of companies and will expand the program over time.
The other change involves detecting AI-generated images on LinkedIn profile pages. Such "deepfake" generators can produce headshots of seemingly real but fictitious people, and these images have quickly become a red flag that a LinkedIn account is a scam.
An AI-generated face. (Credit: thispersondoesnotexist.com)
According to Rodriguez, the company is now using its own AI-based system to detect these deepfakes. It works by spotting “subtle image artifacts associated with the AI-based synthetic image generation process without performing facial recognition or biometric analyses,” he said.
In addition, the company is working on a way to alert users about suspicious activity occurring through their LinkedIn personal messages.
“We may warn you about messages that ask you to take the conversation to another platform because that can be a sign of a scam. These warnings will also give you the choice to report the content without letting the sender know,” Rodriguez said.