Social engineering and the role of business

In the context of security, social engineering refers to the act of tricking a person into handing over financial, sensitive or security details by impersonating someone the victim might normally expect to deal with. It differs from identity theft in that the goal is to get the victim to hand over useful information that might then be used in a variety of scenarios. It might lead to identity theft against the victim, or it might instead be used to access the victim’s place of work with their security credentials.

To understand social engineering, consider a fictional person, Jim, who works for Widgets Incorporated. Like most modern employees, Jim has a LinkedIn profile that cites his current workplace and his role there, which happens to be Finance Officer. One afternoon towards the end of the month, when he is a little harried working through orders coming in from last-minute deals, Jim gets a call on his desk phone. Caller ID shows that it’s coming from another Widgets Incorporated phone number.

When he answers, someone from IT tells him that there’s an urgent issue with the bookings system that may trigger password resets later in the evening. To save him some time, IT are collecting everyone’s passwords so that, should the issue occur, they can manually reset each password back to the same value. Oh, and could he just confirm what his account is for the bookings system? They don’t want to reset the wrong password by mistake.

Naturally, with it being end of month, Jim doesn’t want to come in the next day to find his account is locked out. Sometimes it takes hours to get through to IT to fix problems, and that would put him well behind on his work on the last day of the month. So Jim gives his password, confirms his login ID, thanks the IT person and hangs up so he can get on with his work.

The next day, Jim gets into the office, logs into his desktop, then successfully logs into the bookings system. IT have obviously done their job, because his password still works. It’s exactly the sort of situation that IT departments try to warn users about.

In fact, Jim was never contacted by his IT department. His IT department has a strict security policy requiring that a user never gives his or her password out, so IT would have never called and asked Jim for his password.

Instead, someone using LinkedIn searched for people working in finance at Widgets Incorporated. That person then called into the main reception for Widgets Incorporated and asked to speak to Jim in Finance. When the receptionist asked “Which Jim?”, the person correctly gave Jim’s full name, and added, “He’s a Finance Officer”.

With the call transferred from reception, caller ID on Jim’s desk phone showed it as an internal number. Jim doesn’t memorise all the phone numbers at work, of course – who does these days? – he simply recognised that the first five digits matched Widgets Incorporated phone numbers.

That’s a social engineering attack. Unbeknownst to Jim, he has provided someone outside his company with his username and password for at least one system – and possibly more, given the tendency of most users to reuse passwords across the systems they access at work. Social engineering, in this example, is a combination of specific activities: pretexting [1] and spear phishing [2].

Social engineering attacks are frequently the first step towards a more significant security breach, whether against an individual or a business – within a business, a social engineering attack might get a hacker past the first line of defence. From there, the credentials obtained may not allow full access, but they may grant enough access to discover better credentials, or insecure systems.

Most security organisations suggest that the entry vectors for most security breaches are created by employees themselves – fallible humans who can be tricked. From IBM in 2014 [3]:

What is fascinating—and disheartening—is that over 95 percent of all incidents investigated recognize “human error” as a contributing factor. The most commonly recorded form of human errors include system misconfiguration, poor patch management, use of default user names and passwords or easy-to-guess passwords, lost laptops or mobile devices, and disclosure of regulated information via use of an incorrect email address. The most prevalent contributing human error? “Double clicking” on an infected attachment or unsafe URL.

Cybercrime is, indeed, big business, and social engineering-style attacks cost businesses significant sums every year. Statistics from the Australian Government’s “Stay Smart Online” website in 2015 put the average costs of several types of attack as follows (AUD) [4]:

  • Denial of service: $180,458
  • Web-based attacks: $79,380
  • Malicious insider: $177,834
  • Malicious code: $105,223
  • Phishing and social engineering: $23,209

It is the personal security breaches that can be brought on by a social engineering attack that we will consider today – specifically, the behaviour of many organisations, including financial institutions and telecommunications companies, that encourages customers to be lax about unsolicited calls asking them for information.

Many people are aware of the ongoing scams about computer problems – they’ll answer the phone, and someone claiming to be from Microsoft says there’s a problem with their Windows system, and would they like help to resolve it? The calls are made in such high volumes that inevitably the scammers will snag some hapless person who trusts the caller and, as a consequence, effectively hacks themselves.

Many children, growing up, would have heard the phrase “Do as I say, not as I do”, or a variant thereof, from a parent or another adult. It is seemingly this attitude that many businesses adopt when engaging in unanticipated interactions with their customers.

The process essentially goes like this: a customer gets an unexpected call from their financial institution or telecommunications company – or, worse, from an affiliated company that has been provided with the customer’s details. The call is technically legitimate: it’s “the real deal”, from either the company directly or an affiliate it has an agreement with. However, the process has to start with formal customer identification: “Before I can continue, I just need to confirm your date of birth”, or “Before I can continue, I need to confirm your date of birth and account number”. Effectively, a “challenge/identify/response” exchange that has been started without the customer triggering it.

This is a serious security hazard – so much so that security experts will advise people in such situations to refuse to provide details, regardless of how genuine the request appears. After all, when social engineering works, it does so precisely because it has been done well enough to seem genuine to the person targeted. A user post on Information Security StackExchange highlights the core of the issue [5]:

Several times I get a phone call from a company- my bank, utility companies etc. Many times they are just cold calling me, but once or twice they were calling for legitimate reasons (ie, something to do with my account).

The problem is, all these companies ask you to confirm your personal details, like date of birth. Now I have no way of knowing if the person calling me is the real company, or some phisher (because even if the call isn’t from a blocked number, it’s just a number and I have no way of knowing who owns it).

Usually, when they ask me for personal details to prove my identity, I tell them since they called, they should prove their identity.

At this point they usually get irate and warn me they cannot go ahead for security reasons. Now I don’t want to miss out on important calls, but neither do I want to give out my personal info to anyone who manages to find my phone number.

Therein lies the problem. An astute user, receiving an unanticipated call from someone claiming to be from a company they deal with, will challenge the legitimacy of the call; ironically, call centres are usually not equipped to handle such circumstances: they may become ‘irate’, as per the post above, or may be unable to provide an incoming number that the customer can use to (a) verify the legitimacy of the request and (b) call back on to discuss the matter securely. Many recipients of such calls, of course, will simply proceed with the call, assuming it is legitimate.

When challenged on such behaviour by a Guardian reader in 2004, HSBC responded [6]:

When we phone our customers we do need to ask security questions as we need to establish that we are speaking to the right person and therefore do not divulge any confidential information to somebody else, or do not take any instructions concerning a customer account from somebody who isn’t the customer.

That’s a fair statement, of course. When a company engages in a discussion with a person regarding account details or other potentially sensitive information, it should verify that the person it is engaging with is the person whose details are being discussed [7].

It doesn’t solve the problem under discussion, though – indeed, it sits at the heart of it: is it ethical for businesses to contact you and request personal information?

There are some circumstances, of course, where it is essential that a business contacts its customers. Credit card fraud is an ongoing problem – fraud increased by $53,000,000, to $534,000,000, between 2015 and 2016 in Australia alone [8]. Detecting credit card fraud in real time is big business, and financial institutions now employ a variety of mechanisms not only to detect potential card fraud, but also to notify customers as quickly as possible – typically through phone calls or SMS. Yet there is no small amount of irony (and perhaps even hypocrisy) in a bank or financial institution cold-calling a customer and running through challenge/response questions in order to inform that customer they are being defrauded.

Even as many businesses conduct ongoing cyber security education campaigns with their staff – some even requiring it as mandatory quarterly training – one might argue there is insufficient attention paid to developing processes whereby legitimate business-initiated contact does not, to all intents and purposes, resemble a carefully executed social engineering attack on the customer.

It would perhaps be spurious to suggest such issues cannot be overcome, should organisations be inclined to acknowledge the part they play in this aspect of customer security. Any of the following might, for instance, allow a modicum of reciprocal challenge/response:

  • Contacting the customer, stating there is a requirement to communicate, then asking the customer to call back via a verifiable phone number
  • Similar to the above, sending an SMS to the customer and asking that they make contact via a similarly verifiable path [9]
  • Having an agreed challenge/response question that the customer must ask. (“I’m calling from <bank>. Can you please ask your challenge question to confirm that I am authorised to contact you?”)

Increasing adoption of mobile technology would potentially widen the options available in such circumstances. Some banking institutions, for instance, provide customers with “shield” applications that can be used to generate one-time codes to verify larger transactions [10]. Similar business-supplied applications could be used to enable the customer to challenge a caller: when receiving a call, the customer would be asked to launch the verification application, and the contacting agent would then recite a one-time code that must match the code displayed in the application.
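As a concrete illustration of that last idea, here is a minimal sketch of how a time-based one-time code might be generated on both sides and compared. It is an assumption-laden sketch rather than any bank’s actual implementation: the shared secret, the sixty-second window and the function name are invented for the example, and a real deployment would also need per-customer secret provisioning, clock-drift tolerance and rate limiting.

```python
# Hypothetical sketch: a shared secret provisioned into the customer's app
# lets both sides derive the same short-lived code (RFC 6238-style TOTP).
import hashlib
import hmac
import struct
import time


def one_time_code(secret: bytes, interval: int = 60, digits: int = 6) -> str:
    """Derive a time-based one-time code from a shared secret."""
    counter = int(time.time()) // interval          # current time step
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (HOTP)
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % (10 ** digits)).zfill(digits)


# Call-centre side: the agent reads this code out to the customer.
SHARED_SECRET = b"per-customer-secret"              # invented for illustration
agent_code = one_time_code(SHARED_SECRET)

# Customer side: the app displays its own code; a match suggests the caller
# has access to the bank's systems, not just publicly discoverable details.
customer_code = one_time_code(SHARED_SECRET)
print("Caller verified:", hmac.compare_digest(agent_code, customer_code))
```

The useful property is that a cold-calling scammer who knows only the customer’s name and phone number cannot recite a matching code, because they do not hold the shared secret.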

One might argue there are several challenges to any of the above options:

  • It increases the possibility that a hoped-for customer engagement will not take place – the customer may simply choose not to return the contact.
  • If the challenge/response is made optional, customers may choose not to initiate it, thereby retaining the current, insecure arrangement.
  • Customers may forget the counter-challenge information required, or not have it available at the time of contact.

In situations where a business needs to contact a customer in an emergency – e.g., when a bank detects credit card fraud – it could be argued that the speed with which contact is achieved, and the potential blocking of further fraudulent activity, outweigh the ethically ambiguous approach of requiring customers to be wary of social engineering whilst engaging in behaviour that resembles it. This effectively becomes a micro-utilitarian evaluation: the happiness of the parties involved (the financial institution, the customer, and the person engaging in fraud) is maximised by contacting the customer as quickly as possible [11]. Yet businesses do not contact customers only for emergency reasons, and invariably when they do cold-contact customers they still engage in a challenge/response system that is entirely weighted towards the customer accepting social engineering – more often than not for the sake of a sales call: “Would you like a new service from us?”, “We think we can optimise your bill”, “Let’s talk about renewing your contract”, and so on.

While there are pros and cons to any challenge/response system, there seems to be little interest among businesses in providing suitable reciprocal verification processes – instead, the approach taken is more akin to, say, an insurance company asking its customers to leave their front doors unlocked, just in case agents from the company need to come in and have a chat about changing their insurance options.

Until businesses realise this, and act on it, they will continue to encourage customers – and by extension, their own staff – to fall prey to social engineering. While all businesses have obligations to their shareholders to improve or at least preserve profit margins, their obligations to customer security ought not to be set aside so readily simply because honouring them results in a less convenient means of engagement. While it might be desirable for this ethical dilemma to be addressed through industry best practice or improved social awareness, it seems equally likely that improvements on this front will only be driven by government legislation.

Footnotes

  1. Pretending you’re someone else
  2. Targeting of individuals, typically customised through an understanding of that individual, increasing the personalisation of the attack
  3. IBM Security Services 2014 Cyber Security Intelligence Index, IBM Global Technology Services
  4. Infographic: The cost of Cybercrime to Australia (2015), Stay Smart Online
  5. Question, ‘How do I deal with companies that call and ask for personal information?’ on Information Security StackExchange
  6. I need to ask you a few security questions…, 17 September 2004, The Guardian
  7. Or a verified proxy
  8. Australian Payments Fraud 2017, Jan-Dec 2016 Data, Australian Payments Network
  9. For instance, a banking customer might be asked to call back in using the customer service phone number to be found on the back of any of their credit or savings account cards.
  10. For example, a customer attempting to perform a large transfer would be required to launch the shield application, generate a one-time code, then enter that code as authorisation.
  11. Recall that utilitarianism can be referred to as the ‘greatest happiness’ principle. In such a situation, increasing, or at least limiting the damage to, happiness for two out of three parties would be the appropriate course of action.
