The Old FUD – Fear Uncertainty Doubt

The FUD techniques are certain to come up again and again because they are effective (to a degree).

FUD is a marketing technique that sows fear in cost-conscious customers who are thinking of going to a competitor. Pushing "safety in numbers" and other uncertainties plants FUD in the mind of a potential customer, so it is not so easy to go with a competitor unless one is armed with knowledge.

The first FUD campaign happened when IBM mainframes finally received some competition from the Amdahl mainframe company.

The picture above shows an Amdahl mainframe (with red-hued panels instead of the familiar IBM blue) at Newcastle University.

So obviously Newcastle University did not pay attention to IBM's FUD.

Why do I mention this FUD business? Because it is an old tactic, and it is being used by competing firewall vendors in the security firewall market space. Palo Alto is muscling into a larger market share (by developing and running a good firewall operation).

So the competitors have developed a YouTube video that walks through a demo:

  1. First, one selects an exploit.
  2. Then one configures the test environment, which means setting up what kind of attack will be 'tested'.
  3. Then, conveniently, one can run the attack.

So the competitor ran the Evader software with specific evasion techniques against a Palo Alto firewall they had set up, to see if they could evade it.

 

This is exactly why FUD works: it makes future Palo Alto customers (or current ones) see that the firewall they have is not bulletproof.

Yes, we know that – no firewall is bulletproof no matter how well you configure it; there is always one item that gets missed over the days and years. We are assaulted day after day, and all the hackers have to do is get one attack to work. We have to be cognizant not to become complacent and feel invincible (the "it will not happen to me" attitude).

It is true we have better firewalls, but the only thing that combats FUD, no matter your industry, is massive amounts of information: knowing what you have backwards and forwards.

Contact us to review your environment so that you don’t worry about FUD.

To Measure Risk, Measure Impact: Major Threats and Effects

To measure risk means to measure impact and threats (likelihood):

Risk = Likelihood * Impact  (R = L * I)

 

So what does that mean? What are the threats, and what are their effects on your environment? Answering this gives the true impact of the problem and helps figure out what risk one really has.
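To make the formula concrete, here is a minimal sketch that scores a few threats on a 1-to-5 likelihood and impact scale and ranks them by risk; the threat names and all of the numbers are hypothetical examples for illustration, not measurements from any assessment:

```python
# Minimal sketch of Risk = Likelihood * Impact using made-up 1-5 scores.
# The threat names and numbers are hypothetical, not measured data.
threats = {
    "Unauthorized access":      {"likelihood": 4, "impact": 5},
    "Hijacked account":         {"likelihood": 3, "impact": 5},
    "Insecure interface / API": {"likelihood": 3, "impact": 4},
    "External sharing of data": {"likelihood": 2, "impact": 3},
}

for name, t in threats.items():
    t["risk"] = t["likelihood"] * t["impact"]   # R = L * I

# Highest risk first, so you know where to spend your time.
for name, t in sorted(threats.items(), key=lambda kv: kv[1]["risk"], reverse=True):
    print(f"{name:28s} L={t['likelihood']} I={t['impact']} Risk={t['risk']}")
```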

(The above image was copied from @ipfconline1's Twitter images.)

So let's assume these are the major threats and major concerns (from the image):

Major Threats

  • Unauthorized Access  53%
  • Hijacking Accounts  44%
  • Insecure Interfaces / APIs  39%
  • External Sharing of Data

Major Concerns

  • Data Loss/Leakage  49%
  • Data Privacy  46%
  • Confidentiality  42%
  • Legal and Regulatory Compliance  39%

The threat is one portion of risk; the impact is another.

The idea is to view all of the threats coming at you and review where you should spend your time.

The problem with this methodology is that one has to have a decent understanding of the impact and likelihood of the various threats. Some of these items also need to be taken in context.

If you have 100 computers and they are all running Windows operating systems (different versions: 7, 8, Server, 10), then a threat to your Windows base such as MS17-010 is not equally dangerous for all of the computers.

But what if a virus/trojan attacked and affected 20 of those computers? Now the impact would be higher, so the risk to your organization is higher even from a relatively minor Microsoft vulnerability.

So one thing you will find is that even minor vulnerabilities can grow into major problems, so the potential effect of an exploited vulnerability is the real issue. Every month new patches are released, and at the same time criminal hackers are trying to turn those patched vulnerabilities into working exploits.
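As a rough illustration of the 100-computer example, here is a small sketch in which impact scales with the number of machines actually affected; the likelihood and per-machine impact numbers are invented for illustration and are not a standard scoring model:

```python
# Rough sketch: scale impact by how much of the fleet is actually affected.
# All numbers are invented to match the 100-computer example above.
TOTAL_COMPUTERS = 100
LIKELIHOOD = 0.3            # assumed chance the vulnerability is exploited
IMPACT_PER_MACHINE = 1.0    # assumed impact units per compromised machine

def risk(affected_machines: int) -> float:
    """Risk = Likelihood * Impact, where impact grows with affected machines."""
    impact = IMPACT_PER_MACHINE * affected_machines
    return LIKELIHOOD * impact

print(risk(5))    # only a few machines run the vulnerable version -> 1.5
print(risk(20))   # a virus/trojan spreads to 20 machines -> 6.0 (4x the risk)
```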

Unfortunately every vulnerability has an attack timeline.

Here is the crux of the issue: what is the impact of each separate vulnerability on your environment? As criminals develop better attacks, you have to keep the threats in mind and do proper patching so as to defend your network.


By performing an audit of your environment and reviewing impacts and likelihood, you will hopefully be able to evaluate your risk properly.

Contact Us to help you with this process.

What Is the Real Story on Default Passwords?

Is it really as bad as some say? People are not changing default passwords, and thus hackers can control their machines if remote access is enabled in some way.

I think it is VERY BAD – as people are really looking for ways to make bad decisions:

https://superuser.com/questions/106917/remote-desktop-without-a-password


My apologies to this person, who may innocently have been trying to make some administration easier for himself, but the lack of security knowledge is apparent. One should NOT even think of creating a scenario where there is a blank password on a machine (ever – and it is even worse for remote access).

If this machine were connected to a credit card machine, you would now be in violation of PCI compliance.
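The superuser thread above is about getting around the Windows policy that normally blocks blank passwords for remote logons such as Remote Desktop. Here is a minimal sketch (assuming a Windows host and Python's built-in winreg module) of how an administrator might verify that this policy is still in place:

```python
# Minimal sketch (Windows only): check whether the "Limit local account use of
# blank passwords to console logon only" policy is still enabled.
# Value 1 (the default) blocks blank passwords for remote logons such as RDP;
# value 0 means someone has opened the door the superuser post describes.
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Control\Lsa"
VALUE_NAME = "LimitBlankPasswordUse"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
    value, _type = winreg.QueryValueEx(key, VALUE_NAME)

if value == 1:
    print("OK: blank passwords are limited to console logon only.")
else:
    print("WARNING: blank passwords may be usable for remote logon - fix this.")
```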

Ok, we know not to have default or blank passwords…

Or is it that people don’t need to change the default password as the system is not remote accessible?

Even then the default password should be changed, because physical access needs to be thought of as well, and it is not 100% foolproof.

Or is it that people think the system is not remote accessible but it really is in some way?

The last scenario may be likely if the level of sophistication is not good.

And the hackers are looking for these machines. A post from last year about the Verizon Data Breach Investigations Report (http://oversitesentry.com/why-are-there-cyber-security-issues/) mentions that Remote Command Execution was found on scanned machines more than at other times.

Human error is one of the main reasons for security failures. In 2014, IBM's Cyber Security Intelligence Index noted that "95% of all security incidents involve human error."

 

So how does a stakeholder (the board, CEO, exec team) make sure that human error is minimized (as it will likely never be 100% gone)? It is obvious to most: bring in outside help to get a second or third opinion, and perform tests to see where human error can be minimized. A CISA (Certified Information Systems Auditor) would review the potential risks and set up an audit to methodically find security issues.

Contact us to discuss

2nd Quarter Almost Over – Time to Reassess and Plan

There seem to be a few posts doing a bit of reflection:

Internet Storm Center: "An Occasional Look in the Rear View Mirror" discusses how every so often you should look at what you are doing to see if anything can be retired.

At year end we look back over the year and ahead into the next year for new goals, etc.

So what will happen in 3Q/4Q? Will we develop new and better procedures, guidelines, and other items to improve our organizations?

With a couple of weeks left in the quarter, it would be great to review and reassess any plans you had and redo them if necessary.

Dark Reading: “Why Compromised Identities Are IT’s Fault”

Yes, it is IT's fault, because IT has to do a better job policing itself where it matters. But since it is hard to police "yourself," an outside entity should do it.

Dark Reading claims:

“Before an organization can fight identity-based attacks, it must survive its own internal battle between IT and security. There are two battle fronts. The first: identity access management (IAM) typically comes under the control of the CIO, where more access is better than less to enable business processes at customer speed, even more so for mobility and cloud projects. IAM is not managed by the CISO, even though identity-based risks are at the core of security issues that keep CISOs up at night. This first front can be summarized as the CIO and CISO divide.”

So somewhere along the line security lost a small battle (or a big one). In an audit program (or framework) the outside entity is independent and ultimately reports to the accountable people (the board or exec team).

It does not have to be a fight… err, discussion between the CISO and CIO over whether productivity or security should 'win'.

The ISACA framework (ITAF) is an audit guideline, and the basics are the following:

  1. Plan the audit
  2. Perform a risk assessment of the plan
  3. Audit IT functions under supervision (test the network, servers, software functions, and more)
  4. Document the audit function
  5. Create reports out of the tests – signifying the ineffective controls, control deficiencies, and what these problems would cause for the business
  6. Present evidence of the test results and conclusions
  7. Use other experts to find specific issues where needed (like a DBA (Database Admin), for example)
  8. Note irregularities or illegal acts and reduce risks to an acceptable level

One of the tenets of an auditor is being ethical in creating the audit tests. The reason for this is that if one does not have expertise in a section of IT that needs audit work, then an expert in that field must be brought in. For example, if the company has an agile programming project and the auditor does not understand agile programming techniques, the auditor must get an agile programming expert to review the project.

 

So the ethics of the auditor are very important: knowing when to ask for help is good, as is having the good sense to know when to stop. Knowing to do the right thing is important.

Contact us to review your situation.

OneLogin Security Failure Spotlights That Even the "Experts" Get Hacked

So what to make of the OneLogin Security Incident?

So what to do when even the "experts" get hacked and have potentially lost your confidence and your data?

Unfortunately in this case it is usernames and passwords (potentially), as it is not obvious what was removed or accessed, since a lot of the data at OneLogin is encrypted.

The function of OneLogin is of course to provide a secure method of logging into your environment with one password/authentication method.

So what are a user and an IT department to do about password management?

Don't do what Sony did and store your passwords in an Excel file.

Compliance standards require password management with a minimum set of parameters (see the sketch after this list):

  1. At least a certain size (~10 or more characters) with a certain complexity (numbers, letters, and special characters)
  2. A set lockout duration (e.g., after repeated incorrect entries, lock the account for 30 minutes)
  3. Inactivity/idleness lockouts
  4. Unique IDs and passwords (do not reuse)
  5. Do not reuse passwords across multiple entities
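Here is a minimal sketch of a check for the first parameter; the 10-character minimum and the required character classes are assumptions for illustration, so substitute whatever your compliance standard actually mandates:

```python
import re

# Minimal sketch of a password complexity check (parameter 1 above).
# MIN_LENGTH and the required character classes are assumed values for
# illustration; use whatever your compliance standard actually requires.
MIN_LENGTH = 10

def meets_policy(password: str) -> bool:
    """True if the password meets the assumed length and complexity rules."""
    checks = [
        len(password) >= MIN_LENGTH,
        re.search(r"[a-z]", password),         # lowercase letter
        re.search(r"[A-Z]", password),         # uppercase letter
        re.search(r"[0-9]", password),         # digit
        re.search(r"[^a-zA-Z0-9]", password),  # special character
    ]
    return all(checks)

print(meets_policy("Summer2017"))     # False - no special character
print(meets_policy("Summ3r!2017ab"))  # True
```

Lockout durations, idle timeouts, and reuse rules (parameters 2 through 5) are enforced in the directory or identity platform itself rather than in a check like this.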

 

So why did you set up a OneLogin system? To make it easier to access a variety of platforms and networks. We did not expect OneLogin to have a security problem that causes the very act of logging in securely to fail; now the potential is there that the hackers have your userid and password, and since you have made it easier to access your network, the whole network is accessible to the hackers.

This is usually an acceptable risk for the most part, but if you had a computer system and database that would be especially problematic if hacked, I would set up a separate authentication method from the OneLogin setup, even though this makes things more difficult.

As I have discussed before, perfect security is not possible, especially if you also want functionality.

 

The real question is: what kind of Russian Roulette did you want to play with your business?

The game is this (it depends on your situation, of course): every day you are firing an X-barrel gun, and if the chamber actually has a bullet, then a security event occurs. So the idea is to have a very large gun with lots of barrels (like 500) so that at least the chance of a security event is low.

This is where the funny thing about probability comes into play.

With a true 1-in-500 daily event you may never actually hit it, even though the odds are that you will hit it about once every 500 days (roughly a year and a half) or so. But we have another problem: how do we accurately represent the risk of the organization? How big is your "risk gun"?
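To see why "you may never actually hit it" and "about once every 500 days" can both be true, here is a small simulation sketch; the 1-in-500 daily odds and the two-year window come from the example above, and the trial count is just an illustrative assumption:

```python
import random

# Small simulation of the "risk gun": each day there is a 1-in-500 chance that
# a security event occurs. Odds and window are the assumptions from the text.
DAILY_ODDS = 1 / 500
DAYS = 2 * 365          # watch the organization for two years
TRIALS = 20_000         # simulate many two-year periods

hit_at_least_once = 0
for _ in range(TRIALS):
    if any(random.random() < DAILY_ODDS for _ in range(DAYS)):
        hit_at_least_once += 1

print(f"Chance of at least one event in two years: {hit_at_least_once / TRIALS:.0%}")
# Expect roughly 77% - likely, but far from certain, so some organizations
# really will go two years without ever "hitting" the 1-in-500 event.
```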

I made these 1000-barrel gun units, as well as a 500-barrel gun, to try and represent what a physical security "risk gun" would look like.

So since Risk = Impact * Likelihood:

The higher the impact, the higher your risk: where the impact is high, the risk is higher than where the impact is low. Now we get into the subjective gauge of likelihood. This setting can be fluid and can create many problems as circumstances change, such as when new malware is introduced or machines are not patched.

So RISK becomes a moving target that has to be assessed by an independent person so as to approximate it as best as possible under the circumstances. Here is where you figure out whether it is a risk of 1 in 1000 (low), 1 in 500 (not low, but higher), 1 in 300 (medium), or 1 in 150 (high) for each day.
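To put those daily odds on a more intuitive scale, here is a short sketch that converts each tier into the chance of at least one security event over a 365-day year; the tiers and labels come from the paragraph above:

```python
# Convert the daily odds of each risk tier into the chance of at least one
# security event over a 365-day year: P = 1 - (1 - p_daily) ** 365.
tiers = {
    "1 in 1000 (low)":               1 / 1000,
    "1 in 500 (not low but higher)": 1 / 500,
    "1 in 300 (medium)":             1 / 300,
    "1 in 150 (high)":               1 / 150,
}

for label, p_daily in tiers.items():
    p_year = 1 - (1 - p_daily) ** 365
    print(f"{label:32s} -> {p_year:.0%} chance of at least one event per year")
# Roughly 31%, 52%, 70%, and 91% respectively - even the "low" tier is not
# rare over a full year.
```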

So when you have a single sign-on application, it had better be checked for security; otherwise the risk is greater, since the impact is great.

Contact US to review.