Wednesday, December 28, 2016

PHPMailer vulnerability

I blogged yesterday about the release of the PHPMailer vulnerability CVE-2016-10033 and how it was unlikely to be exploited in a default release of Joomla.  Now there's a PoC released, but I still haven't changed my position on this.

I'm sure that there are vulnerable applications out there. I also always recommend that people patch as soon as possible when patches are available (pending testing). But this one seems overhyped to me. Joomla! includes PHPMailer as a library, but doesn't use it in any way that allows for exploitation. SugarCRM also uses PHPMailer, but it isn't immediately clear to me whether it is used in a way that allows the vulnerability to be triggered. Again, you should patch, but don't burn down the house to do it unless you know you are vulnerable.

As an aside, the default PoC script (which every skiddie out there will use without modification) uses the string "zXJpHSq4mNy35tHe" as a content boundary.  You can feed this string to your IDS to spot attackers on the wire using the unmodified PoC script.
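
If you want to turn that string into a quick-and-dirty detection, here's a minimal sketch in Python using scapy (an assumption on my part - any IDS signature language works just as well, and the capture filename is hypothetical):

```python
# Minimal sketch: flag traffic containing the default PoC's MIME boundary.
# Assumes the scapy package is installed; "traffic.pcap" is hypothetical.
from scapy.all import rdpcap, IP, TCP, Raw

MARKER = b"zXJpHSq4mNy35tHe"  # content boundary used by the default PoC

for pkt in rdpcap("traffic.pcap"):
    if pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt.haslayer(Raw):
        if MARKER in bytes(pkt[Raw].load):
            print(f"possible PoC traffic: {pkt[IP].src} -> {pkt[IP].dst}")
```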

Most of this content was cross posted from my Peerlyst page.

Tuesday, December 27, 2016

New Joomla vulnerability - TL;DR you're probably okay

There's a new vulnerability in the core Joomla distribution, this time in the bundled PHPMailer library.  Successful exploitation results in remote code execution (RCE), and normally I'd be shouting "patch now" from the rooftops.  But in this case, you're probably okay.

The vulnerability is in the "From" email parameter.  The core distribution only uses an API that does not allow the "From" address to be modified.  Joomla advises that some third-party extensions may use the PHPMailer library in ways that allow the "From" address to be modified, which could result in RCE.  However, they stop short of naming any vulnerable extensions.  Do you know of any vulnerable plugins?  Hit me up in the comments.

Friday, December 23, 2016

Rejects v1

As many of you know, I regularly contribute to SANS NewsBites.  It's an outstanding email newsletter that normally is published twice weekly.  Not everything I contribute gets published though.  Sometimes things get chopped by the editors.  I have a pretty good idea of what doesn't follow SANS' editorial guidelines and try not to contribute those thoughts.  For the rest of it though, I realized I was letting a lot of good content I'd already written go to waste, so I decided to start publishing it here under the heading "rejects."  This blog series is not affiliated with SANS in any way and does not reflect their views.  Also, I am not in any way knocking NewsBites for not publishing everything I send in.  It's a tremendously valuable newsletter - one that I've used myself throughout the years and I'm honored to be a contributor now.

Regarding a story about how the number of claims against cyber insurance are on the rise:
In my practice, I work with a number of organizations that have great confusion about what is and isn't covered by their cyber insurance policies.  Don't assume anything here; the stakes are far too high.  I always recommend organizations perform tabletop exercises to determine if their coverage would be sufficient for events reported in the media and adjust their risk models (and perhaps coverage) to suit.
Regarding a story about how the US military "was almost brought to its knees" by Russian hackers:
The media has blown this out of proportion, saying that it could "bring the US military to its knees."  Those who understand the intel gain/loss model know that no such action is likely.  Russia could use this access to continue to gather information indefinitely until detected or perform a very temporary disruptive event.  Attackers most often have far more capability than they exercise during an intrusion.
That's all I have for this week.  Hopefully this adds value in some way.

Thursday, December 22, 2016

South Carolina wants porn filters installed on new computers

I so wish this was a joke.  Unfortunately, it's serious.  Take a minute and read the article.

This is a great example of legislators wanting to do something positive, but doing something very negative instead.  The desire here is to limit human trafficking (a noble goal) but the method is through porn filters on computers sold in the state.  There are so many things wrong with this, I don't even know where to begin.  Obviously there's no causal relationship to speak of.  Porn doesn't cause human trafficking or vice versa. So there's that important tidbit.

Then there are the Constitutional aspects of the proposed legislation.  It's unlikely this law would ever survive a Constitutional challenge, and if that's the case, then passing it just drains resources from the state (resources spent in a likely futile attempt to defend the legislation in court could be better spent elsewhere).

But my real concern is that the impact on computer security would likely be significantly negative.  To be effective, the porn filters would have to integrate with browsers and would be unlikely to meet the same security standards as other software.  Then there are the issues of telemetry and big brother, securely updating block lists, etc.  Further, the proposal allows end users to pay to remove the porn filter.  I can already see the underground "free porn filter remover" economy popping up, similar to illicit keygen programs, most laced with malware.

I have no love of porn (I see way too much of it in forensics cases) and don't live in the state of SC.  But I bring these thoughts forward because far too often we experience a disconnect between intent and reality in infosec, particularly when legislators get involved.  Take time over the holidays to educate your family members that many pleas of "save the children" or "stop human trafficking" have negative implications for infosec.

Tuesday, December 20, 2016

Encryption of healthcare SAN/NAS

I ran this poll a couple of weeks ago on Twitter.  I was looking to back up a theory of mine with some data, however bad my sample set is (people who follow me on Twitter).  In the end, I got some data, but I'm not sure how valid it is.  


The problem with this poll is that even though it got 53 replies (which I'm super thankful for), I don't know how many of the respondents really work in healthcare.  People also have a tendency to tell you what they think you want to hear, and I think that's going on here too.  People know that HIPAA requires encryption for data in transit and on portable devices.  I think they are extending that requirement to the SAN/NAS example here.

I can't imagine many likely scenarios where you would invest money in a SAN/NAS (where performance is key) and then give up performance (money) to disk encryption.  Full disk encryption primarily protects against physical attacks, and your SAN/NAS should already be in a physically secure environment.

This was cross posted from my Peerlyst account.  I'm really interested in people's perspectives on this, but I've had to largely disable comments on the blog due to blog spam.  If you have something to contribute, hop on over to Peerlyst and comment there.

Saturday, December 17, 2016

Infosec reporting and the problem of reaching your audience

If you've ever taken a course with me at SANS, you know how big I am on reporting and getting it right.  You can be the best in the world at the technical aspects of infosec, but none of that matters if you can't write.  I regularly tell people to shoot for a maximum 7th grade reading level in their executive summaries.  Your executives aren't stupid (most of them), but if you make your writing hard to read, you're less likely to get engagement.  This great article I found doesn't cover writing for infosec explicitly, but it really hammers home how many people read below a 9th grade level.
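
If you want to check your own writing before it ships, here's a minimal sketch using the third-party textstat package (an assumption on my part - any Flesch-Kincaid implementation will do):

```python
# Minimal sketch: score an executive summary's reading grade level.
# Assumes the third-party textstat package (pip install textstat).
import textstat

summary = ("Attackers accessed the billing server using a stolen password. "
           "We removed their access and reset all user passwords.")

grade = textstat.flesch_kincaid_grade(summary)
print(f"Flesch-Kincaid grade level: {grade}")  # shoot for about 7 or below
```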

Read and heed. The people in infosec making the most cash aren't always the smartest or the most technical. They're the ones who can communicate effectively - and that invariably involves writing coherently.

This was cross posted from my #Peerlyst account.  If you haven't yet joined the Peerlyst community, I think it's a great source of knowledge for the community. Go sign up.

Tuesday, December 13, 2016

Bad correlations in IR? Maybe no reverse engineers is the problem?

Correlation isn't the same thing as causation.  Forensics professionals often seem to forget that when they deal with incident data.  Just because an event occurred and malware that could have caused the event was found on a machine doesn't mean the malware caused the event.  Is there a correlation?  Sure.  Is this enough to establish causation?  Nope.

I semi-regularly tweet images of spurious correlations to remind my DFIR brethren that correlation is not the same as causation.  These are so ridiculous that they make the point powerfully: correlation and causation cannot be the same thing.
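
Here's a toy illustration of why trending data correlates so easily (the numbers are invented purely for illustration):

```python
# Toy sketch: two unrelated but steadily growing series correlate strongly.
# All numbers are invented purely for illustration.
import numpy as np

malware_alerts = np.array([10, 12, 15, 18, 22, 25, 30])  # weekly IDS alerts
coffee_budget = np.array([40, 43, 45, 50, 52, 58, 60])   # weekly SOC coffee spend

r = np.corrcoef(malware_alerts, coffee_budget)[0, 1]
print(f"correlation: {r:.2f}")  # ~0.99, but coffee doesn't cause malware
```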

But why do we ever assert that correlation and causation are the same?  I think the root of this is a lack of knowledge.  This in turn leads to logical fallacies in our thinking.  We can fix this correlation/causation confusion by learning more about incident response.  How do we get more data? What data is actually useful in investigating the incident?  If I wanted to find out more, where should I look?  What does normal look like in my environment?

One of my biggest suggestions for overcoming these issues in IR is to make sure you have access to a reverse engineer.  Using black magic and ritual sacrifices*, reverse engineers can help alleviate confusion about what capabilities a piece of malware actually has.  I frequently read in reports that "the malware is using code injection."  Why, I ask?  "Because we checked for normal persistence and didn't find anything."  This is obviously not a strong connection.  In fact, it's REALLY weak.  Absence of one thing is not proof of another.  Period.
* Often, reverse engineering involves neither of these things.

Reverse engineers can also benefit organizations by helping to fully decode malware C2.  I can't tell you the number of reportable HIPAA incidents I've seen staved off by knowing specifically what an attacker did on a network (and what they exfiltrated).  Full packet capture is great, but it can't be fully exploited without good C2 analysis (which requires a reverse engineer).

I'll step off my soapbox and close by saying that having a reverse engineer available is a game changer for many organizations.  Reverse engineering answers questions that no other tool or capability can.  If your team is growing and doesn't yet have enough work for a full time reverse engineer, you can always put one on retainer from a reputable firm.  If you need help, talk to Rendition Infosec - we have reverse engineers who would be happy to maximize your return on investment and change "we think this is what happened" to "we know this is what happened."

Monday, December 12, 2016

Disqualifying votes in Michigan - how would this play in PA?

I'm a little late on this, but figured I'd discuss the issue anyway.  I read a story that says that many of Clinton's votes in Michigan may be disqualified from the recount due to problems with the voting machines in precincts that heavily favor her.  The issue has to do with reconciling vote tallies with voter sign in logs.  The discrepancies in reconciliation have to do with old voting machines that may be faulty.

Interestingly, we only know of this because of the paper record that is generated in Michigan.  But Pennsylvania uses pure electronic voting.  How would this play there?  As I ponder the idea of auditing an e-voting machine, what would happen if malware were found on the machine?  Would you have to disqualify all of the votes?  Since most machines are air-gapped, what if malware was found on the machine that programs or reads the PCMCIA cards for the e-voting machines?  Do you disqualify all of the votes for the machines the infected computer came in contact with?

Yes, malware could technically change the paper backup used in many states too (as Cylance showed), but I'm more concerned about the Pennsylvania case since that's potentially going to be an issue sooner than later.

If finding malware on a machine invalidates votes, then the smartest way to hack an election is perhaps to compromise machines in the precincts where your opponent is heavily favored, then trigger an audit.  I'm not recommending this, just suggesting it's the logical conclusion of a strategy that throws out votes when malware is found.

I don't have all the answers and I'm not trying to start trouble.  But I would urge you to contact your state legislator and ask them how your state will handle issues of malware found on voting machines or those used to tally votes.  If they don't have an answer, suggest that they sponsor legislation.  Practically any legislation on the matter is better than the court battles that will inevitably occur in a legislative vacuum.

Update: a US District Judge just ruled that a recount cannot be held in PA, saying it cannot be completed before votes must be certified. The judge also said it "borders on the irrational" to suspect hacking occurred in Pennsylvania.

Saturday, December 10, 2016

I'm a failure - (mis)adventures in CFP submissions

I love speaking at security conferences.  A good conference presentation goes beyond just sharing your data.  It's a true performance art. Edutainment if you will.  I've been a technical reviewer for submissions at a number of conferences as well.  I always submit to a CFP as though I were a reviewer thinking "is this a presentation I would like to see myself?"  If the answer is no, I don't submit it.

That being said, I'm always a little put out when I get rejected for a conference.  A bunch of reviewers looked at my work. My idea. My baby. And having judged it, they found it lacking. No matter how many times I've been through it or how I just know it will be different this time, I'm always put out.  Sometimes I have a feeling of impostor syndrome.  I always find myself wondering "why didn't they like me" or "why wasn't I good enough?"  Sometimes I think that the reviewers know a bunch of people have presented on this topic before me - they think I'm a fraud.... Thoughts (self destructive thoughts) like these happen every. single. time.

But then I quickly remember something I saw on an old "No Fear" tee shirt years ago:
100% of people who don't run the race never win
This is when I remember that I have to put myself out there to win.  I personally submit several proposals for every one that is accepted.  Sometimes when I get rejected from one conference, I submit the same paper to another conference with no edits and it gets accepted. Sometimes reviewers are helpful (like at DEFCON, thanks Nikita) with providing great feedback and I am able to modify my submissions to be better for the next conference.

When you submit to a CFP and aren't accepted, I think it's important to let others know that you've submitted, but were ultimately rejected.  I think this does two important things:

  1. It lets others know you are at least trying to give back to the community.
  2. It lets others know they are not alone in being rejected.

Most conferences receive twice the number of submissions they can accommodate, some receive even more than this.  They have to reject someone. In fact they have to reject a lot of someones.  But don't let this discourage you.  Keep submitting, keep polishing the submission, and most of all don't fear failure.  It's totally natural to feel bad with a rejection notice, but you have to brush yourself off and get back up again.

Why am I writing this now? I submitted two papers to Shmoocon this year, both before the early decision cutoff.  When the early decision came and went without me on the list, I felt bad about myself.  Then I got out of my slump and figured maybe I'd be accepted in the second round. Yesterday, I got two notifications.  The first said I was accepted to speak.  I was elated.  The second email, ten minutes later, said my other talk was rejected.  I was totally deflated and wondered "what's wrong with me?"  Truth be told, I never expected to be picked up for both talks.  I'm honestly happy I got accepted for one talk at all.  This will be my third time speaking at Shmoocon and it's an awesome conference.  But they didn't like my second talk.  They don't like my ideas. I'm a failure. A drink or two later I was celebrating being accepted for one talk and the pain from the rejection felt long gone.  If I'd been rejected for both, the sting would likely still be lingering.

Update: Someone reached out to me and said I should be happy one paper got accepted. I am. He said I should be grateful that I wasn't, as he put it, "totally rejected."  Again, I am.  For the record, I submitted three talks to RSA this year with a 100% rejection rate.  My point in explaining this was to note that you can feel rejection even when you've been accepted.  Again, if this helps you - great.  If it doesn't, then just forget I wrote it.

These destructive thought patterns are far too easy for us all to fall into.  I'm not writing this for your sympathy, I'm hoping that others can read this and realize "I am not alone - this is something that others go through."  If that's not you, I envy you and the control you exert over your emotions.  For the rest of you: you are normal, these thoughts are normal, don't give up, don't stop submitting, give back to your community.

I'll close by saying that I think security conferences are very important and so is speaking at them. My company Rendition Infosec sponsored several conferences this year and will continue to in 2017.  I also strongly encourage my employees (okay, it's technically coercion) to submit and speak at conferences.  Three members of the Rendition team (Edward McCabe, Brandon McCrillis, and Michael Banks) spoke at multiple infosec conferences this year.  I try to coach them through the submission process to maximize their acceptance rates, but I suspect I'm putting them in a bad emotional state when they are rejected. For that, let me formally apologize.

Tuesday, December 6, 2016

New Linux privilege escalation vulnerability

There's a new Linux privilege escalation vulnerability (CVE-2016-8655) that will allow normal users to elevate to root. The bug is in the networking subsystem and relies on the attacker being able to create a raw socket with CAP_NET_RAW. In most Linux distributions, users can't do this unless unprivileged namespaces are enabled.
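
If you want to check where a given box stands, here's a minimal sketch (the sysctl paths are distribution-dependent assumptions - kernel.unprivileged_userns_clone is a Debian/Ubuntu patch and may not exist on your kernel):

```python
# Minimal sketch: report whether unprivileged user namespaces appear enabled.
# These /proc paths vary by distribution and kernel version (assumptions).
from pathlib import Path

for knob in ("/proc/sys/kernel/unprivileged_userns_clone",
             "/proc/sys/user/max_user_namespaces"):
    path = Path(knob)
    if path.exists():
        print(f"{knob} = {path.read_text().strip()}")
    else:
        print(f"{knob}: not present on this kernel")
```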

Red Hat notes that RHEL 5 and RHEL 6 are not impacted by the bug. RHEL 7 is, but not in its default configuration, since unprivileged namespaces are not enabled there.

Multiple versions of Debian are listed as vulnerable.

There are also many Ubuntu builds that are vulnerable.

The researcher who found the bug (Philip Pettersson) notes that he discovered the bug by examining areas where memory is allocated in unprivileged namespaces.  Since these are a relatively new development in Linux, it might be that there are locations where developers didn't account for untrusted users having access to manipulate certain kernel structures.  Other such issues may exist in other areas of the code.

At Rendition Infosec we always recommend that clients minimize their exposure by applying the latest operating system and software patches.  This bug also demonstrates another principle that we try to drive home with our clients: minimize your attack surface.  If you don't need it, don't enable it.  Minimizing attack surface is what keeps RHEL 7 from being vulnerable in a default configuration.

Tuesday, November 29, 2016

Fractional voting - how did they get this so wrong?

As we all know, there's no shortage of hoaxes on the Internet.  After writing about election hacking the other day, someone responded to me that hacking has already been demonstrated and offered this YouTube video as proof.  The video was produced by Bev Harris from blackboxvoting.org.


The guy in the video, supposedly a computer savvy professional, explains that the reason he's sure the GEMS election reporting software is subject to hacking is that votes can be counted in fractions.  The proof?  The database schema can store vote counts as integer, single precision floating point, or double precision floating point values. Sure, storing integers as floats is stupid from a space perspective, but the fact that the schema allows for it isn't malicious by itself.

Do people take this seriously?  Um, unfortunately yes...


He shows in the video how he can write software to restrict voting to a particular percentage by using fractional votes. But there's a chicken and egg problem here - how do you get access to this data in the first place, and is there an audit trail?  Also, sure, you can manipulate the tallies with floating point math given the access they demonstrate, but you could do it with integer votes too.  Further, the fact that the GEMS election tally system allows for single and double precision floating point is arguably a feature.  Some areas of the world use cumulative voting, so this would be one way to store that data.

But besides common sense, how do we debunk something like this?  Well, examine the video at around 10:10.  The supposed computer expert explains single and double precision.  He claims that a double can use between one and two decimal places and a single between zero and one (or something to that effect).  He has no idea what these terms mean. Wikipedia has a better idea of what a double precision floating point number is.  But that's the problem.  Many will see this video and the supposed demonstration and be fooled because they have no idea what this "savant" is doing.
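
For reference, here's what single and double precision actually mean, sketched in Python with numpy (assumed installed):

```python
# "Single" and "double" are 32- and 64-bit IEEE 754 floats with roughly
# 6-7 and 15-16 significant decimal digits - not "one or two decimal places."
import numpy as np

print(np.finfo(np.float32).precision)  # ~6 significant decimal digits
print(np.finfo(np.float64).precision)  # ~15 significant decimal digits

# Neither type stores 0.1 exactly; single just loses precision sooner.
print(f"{np.float32(0.1):.20f}")  # 0.10000000149011611938...
print(f"{np.float64(0.1):.20f}")  # 0.10000000000000000555...
```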

Monday, November 28, 2016

Kill chain mnemonic

Trying to learn the 7 phases of the Cyber Kill Chain(tm) but need a handy mnemonic?  This one comes courtesy of one of my SANS FOR578 students:

Real Women Date Engineers In Commando Armor*

Which stands for:
  • Recon
  • Weaponization
  • Delivery
  • Exploitation
  • Installation
  • C2
  • Actions on Objectives

* I'm fairly certain this mnemonic is not true - getting a date while wearing commando armor seems highly unlikely

Sunday, November 27, 2016

DFIR in the election recount(s)

Regardless of your political position, the upcoming voting recounts and e-voting audits are almost certain to stir some feelings in every American.  I personally don't have a dog in this fight, but I find the prospect of a post election audit of e-voting machines to be fascinating.


Every four years we criticize the security of the e-voting machines without actually doing anything about it. And security pundits talk about the risks. Some even demonstrate how the machines can be hacked (side note, I think the Cylance demo just days before the election was a reckless publicity stunt).  Despite all the talk, to my knowledge we've never had a post-election audit of the e-voting system. Now that's a possibility and I for one couldn't be happier.  I could even argue that with the US intelligence community openly blaming Russia for attacks, there's never been a better time to perform such an audit.

I'm most interested in the prospect of doing forensics on the voting machines and the computers that program, read, and report results from those machines. Many talk about how the voting machines are airgapped. But they all receive commands and ballots from some other machine on the network (many via PCMCIA cards). And let's not kid ourselves about the security of the machines used to program the ballots on the e-voting machines. Michigan can't even get the lead out of the water in Flint. How much attention and budget do you think they've devoted to the computer security of their election commissions? I'd bet the money in my pocket that I could be on the controller of at least one election district machine before the week is over. Any competent nation state can do it too.

This week, I'll blog about some of the complications of the audit from a DFIR and CTI standpoint.

For now, I think it's interesting to consider the more important question of attribution.  Suppose the audit uncovers widespread compromise of e-voting machines or their controllers. What then?  Cyber attack attribution is difficult in the best of circumstances. But in this case we've telegraphed our intention to audit the systems, and in doing so have given any potential adversary time to cover their tracks. As we regularly tell clients at Rendition Infosec, it's nearly impossible for an adversary to completely cover their tracks and make it appear that an intrusion never took place. But anti-forensics techniques definitely complicate attribution.

Do you have some thoughts on how attackers might have covered tracks? Unique DFIR or CTI angles I haven't considered? Hit me up on Twitter using the hashtag #DFIRrecount and get the conversation going.

Tuesday, November 22, 2016

Cover your badge, cover your papers, think before your photo op

Yesterday Kris Kobach met with the president-elect about plans for the Department of Homeland Security.  The two posed for a photo after the meeting, and that's where things get interesting.


Notice that although Kobach has a folio with him, he is carrying papers outside it. In today's world of high resolution photography, the words on the paper are clearly visible.


When we do penetration tests at Rendition Infosec, I find a tremendous amount of value in publicly available photographs.  Sometimes these are pictures of employee badges (yes, Rendition has a badge printer).  Other times photos reveal information about internal building layouts and other information that gives us an edge while pretexting in social engineering.  In any case, this is a great example to remind your employees to be careful when cameras are around.

Monday, November 21, 2016

Patch your ntpd servers

With many important things depending on time (think Kerberos), NTP is an important but often forgotten component of your network.  Because the NTP daemon for Linux is relatively lightweight, it's rare for bugs to pop up in such a common service.

But a new bug has been found in the NTP daemon.  It allows a single-packet denial of service condition that can be exploited remotely without authentication.  To make things more interesting, there's a trivial proof of concept exploit available.  The vulnerability is CVE-2016-7434, and the exploit and advisory are linked here.
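
A quick way to see what your ntpd reports, as a hedged sketch (assumes the stock ntpq utility is on the path; output formats vary between builds):

```python
# Minimal sketch: ask an ntpd for its version string using the stock ntpq
# tool. Assumes ntpq is installed; output format varies between builds.
import subprocess

result = subprocess.run(["ntpq", "-c", "rv 0 version", "localhost"],
                        capture_output=True, text=True, timeout=10)
print(result.stdout.strip())  # releases 4.2.8p9 and later fix CVE-2016-7434
```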

At Rendition Infosec we always recommend clients apply the latest patches.  But in this case, that may not be possible.  NTP runs on many embedded products and on many of those products it can't be patched.  Some network appliances have fallen out of support with vendors or vendors may simply be slow to provide a patch.  In those cases, restricting access to NTP from unauthorized locations may be your only recourse.  If your network isn't architected for defense in depth, maybe now is a good time to consider what changes you can make today that will make defense tomorrow easier.

Sunday, November 6, 2016

How forensics really works (a post for my mom)

Dear Mom,

Forensics doesn't work like you see on NCIS, where Abby does the impossible with chip-off and data recovery to close the case in time to make an arrest.  Malware reversing doesn't work like you saw in CSI: Cyber, where you just drop the code in a hex editor and the exploits and malware are conveniently colored red (on a green background).  Terrorists can't really take down the whole air traffic control network, but if they could, I wouldn't drive down the runway in a Porsche to download the software.

Forensics is technical and hard.  It doesn't happen in the span of a single hour-long show.  While you've probably heard that nothing is EVER deleted, that's not really true.  The more time that goes by between a deletion and the investigation, the less likely I am to recover anything.  Think about that "The First 48" show you like so much.  Sure, most of the time police get called right after the murder and get a fresh crime scene.  But remember that episode where the police get called to a vacant lot months after the murder happened?  That murder scene was fresh once too, but not when the police got there.

Forensics is (usually) a lot like that episode.  Instead of getting a fresh crime scene, there was a thunderstorm that washed away some blood evidence. Some drunk college kid came by and urinated on the ground near the body.  A young girl saw something shiny in the lot while walking home and removed a shell casing because it was "pretty" and she wanted it for her dolly.  Three hoboes having an orgy came through and moved the body to use it as a mattress.  The crime scene is a mess.  Yeah, forensics is a lot like this crime scene - it's just a mess.

But for all the bad, forensics isn't all manual either.  I'm not individually opening each file on the system looking for keywords in your documents.  I don't manually search through your browser history either.  There are a number of sites I don't ever want to see your password for.  What investigative value could your saved password for Amazon have?  The only value I see is you accusing me of purchasing something on your behalf.  I have filters to remove this sort of private information.

And if you have tens or hundreds of thousands of files (say for instance emails) I'm going to use automated tools to reduce this number to something I can manually examine.  All the interns in the world won't be able to cull through a hundred thousand emails in a timely fashion.  I'm going to sort files into four bins:
  • Files that match a certain keyword of interest (blacklist)
  • Files that match a certain keyword you DON'T want to see (whitelist)
  • Files that contain words from both the whitelist and the blacklist
  • Files that don't match anything
Depending on the parameters of my investigation, I might only look at the files that match the blacklist words.  In other cases, I'll also examine those that match both the blacklist and the whitelist. Often, this second category is investigated by another independent investigator, since items on the whitelist are often very sensitive.  Even among documents that match my search terms, there may be many that are well known (matching cryptographic hashes of known files or of files I've already examined).  Using this method, I can get through tens of thousands of files quickly, provided my search terms are correctly defined.
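
To make that concrete, here's a toy sketch of the binning (the keywords, paths, and hashes are all invented for illustration):

```python
# Toy sketch of the four-bin keyword triage described above. The keywords,
# the evidence path, and the known-hash set are invented for illustration.
import hashlib
from pathlib import Path

BLACKLIST = {"merger", "acquisition"}  # terms of interest
WHITELIST = {"password", "ssn"}        # terms I do NOT want to see
KNOWN_HASHES = {"d41d8cd98f00b204e9800998ecf8427e"}  # already-reviewed files

bins = {"blacklist": [], "whitelist": [], "both": [], "neither": []}

for path in Path("evidence").rglob("*.txt"):
    data = path.read_bytes()
    if hashlib.md5(data).hexdigest() in KNOWN_HASHES:
        continue  # well-known or previously examined file, skip it
    text = data.decode("utf-8", errors="ignore").lower()
    hit_black = any(word in text for word in BLACKLIST)
    hit_white = any(word in text for word in WHITELIST)
    key = ("both" if hit_black and hit_white
           else "blacklist" if hit_black
           else "whitelist" if hit_white
           else "neither")
    bins[key].append(path)

for key, files in bins.items():
    print(f"{key}: {len(files)} files")
```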

So Mom, I opened this post by telling you about forensics works of fiction like CSI: Cyber, NCIS, and Scorpion.  I'd like to help clear up another fictional forensics story.  We both know the FBI has recently undertaken a very high profile forensic investigation involving large quantities of email.  The FBI claims to have investigated all of the email and cleared the suspect.  Some people have trouble understanding how this could happen so quickly (even though they apparently believe the timelines on NCIS - it's still on the air, after all).  You mentioned this to me today in a phone call, and I'm going to set the record straight.  You also know this isn't politically motivated, since you know I'm nauseated by both candidates.


Let's examine a few facts about the recent FBI case:
  • The computer was never used by the subject of the investigation
  • There are 650k emails on the suspect machine
  • The number of emails in the 650k that match the blacklist is probably very low compared to the overall number of emails
  • Some number of these emails on the blacklist may have already been examined in a previous investigation
Considering these facts and knowing how email and files are processed in forensics investigations, it's completely possible that the FBI processed 650k emails for the context of this investigation in this timeframe.

Mom, it's fine if you want to distrust the FBI.  A little distrust of authority is actually probably healthy.  Say that the FBI is actively covering something up.  Say that if they release the emails, Putin will start WW III.  Say whatever you want, but don't subscribe to the "FBI couldn't have done it this quickly" narrative.  It's not only wrong, it's provably wrong.

Also: Mom, I love you.

* For the person who hit me up on Twitter assuming that I was picking on all moms with this post, let me set the record straight.  My mom, a great professional nurse, nursing administrator, etc., has a complete lack of understanding about technology.  I honestly wrote this post in response to one of her shares on Facebook that highlighted some mistruths re: forensics.  

Wednesday, November 2, 2016

DNS shenanigans - 3rd party disclosure of non-public data to swing an election

Earlier I posted about the possibility of disclosing private data to change the course of an election or otherwise influence public policy. Little did I know that slate.com was already working on a story to do just that.

A researcher identified only as Tea Leaves disclosed that he had access to passive DNS logs from one or more ISPs.  Using this DNS data, he can see who is communicating with whom (as long as they use domain names to communicate).  I'm going to ignore the obvious ethical issues of disclosing private data in this post (more on that later).  I'm also going to set aside the fact that the conclusions of the slate.com story have been debunked.

I do think it's worth discussing what researchers could do with ISP grade passive DNS data so you understand what can be done with it if someone were to abuse the data.  You should know that most researchers with passive DNS data only see that resolutions occurred, not the specific IP addresses that made the requests.  It is this unique data that allowed Tea Leaves to make the Trump server "conclusions."

A few things you could do with similar data:

  • See all the websites you visit and attribute these visits to you specifically and what times you visit them
  • Find all the porn websites visited by your local politician
  • Find out all of the freaky porn sites you visit and blackmail you
  • Figure out who your bank is
  • Track your shopping habits and deliver targeted advertising by snail mail
  • Find out what antivirus software and other third party software is used in a corporate network
  • Discover a publicly traded company that has been compromised by malware and short their stock

I'm sure there are some other ideas out there; this is just a partial list to get you thinking.  Yes, I know, this is why you use a VPN.  I'll say that's better, but it doesn't in any way minimize the seriousness of what Tea Leaves did.  Also, anyone with ISP level data access can tell when you are using a VPN and then look for DNS requests leaving that location as well.  Sure, there may not be a 1:1 correlation between a particular DNS request and you (others could be using the VPN server), but given enough data you can probably paint a pretty complete picture.

Malware researchers need access to this data at an ISP level - there are some very important benefits here that help keep the Internet safe.  But in order for this data to continue to be shared, we have to police our own community.  We can't prevent researchers from leaking privileged data (even NSA and FBI can't seem to stop this).  But we can actively dox and shun those who abuse their positions of trust.

Monday, October 31, 2016

New Shadow Brokers dump - thoughts and implications

In case you missed it, the Shadow Brokers just released a list of keys that are reportedly used to connect to Equation Group compromised servers.  While this dump doesn't contain any exploits, the filenames do contain the IP addresses and hostnames of machines on the Internet.  Presumably these are all infected machines; some are reported to have been used as staging servers for other attacks.

If your organization owns one of the servers in this dump, obviously you should be performing an incident response.  But the Shadow Brokers themselves recommend that you only perform dead box forensics, taking disk images in lieu of live response.  This quote was taken from the Shadow Brokers' latest post.
"To peoples is being owner of pitchimpair computers, don’t be looking for files, rootkit will self destruct. Be making cold forensic image. @GCHQ @Belgacom" 
If you're one of the organizations impacted, but you're not comfortable performing dead box forensics on Unix machines (most or all of these machines are Solaris, Linux, and HPUX according to those performing scans), talk to us at Rendition Infosec - we'd love to help.


What's interesting is that we now have a list of victims of an apparent government organization (Equation Group).  To my knowledge NSA has never openly admitted these are their tools, but every major media outlet seems to be running with that narrative and we have no substantive evidence against it.

Cyber insurance coverage and nation state hacks
Let's assume that at least some of these organizations have cyber insurance.  There are some interesting questions here.  First, these hacks appear to be pretty old, and many likely predate the purchase of cyber insurance.  How does cyber insurance handle pre-existing conditions?  Even if the policy covers a pre-existing hack, the bigger question I have involves the "Act of War" exclusion in many policies.

If we assume that Equation Group is a government organization (e.g. a state sponsored hacking group), does the compromise of the servers identified in the dump constitute an Act of War?  Since this is presumably only espionage and not attack, the answer is probably no.

But suppose an organization hacked by Equation Group via one of these compromised servers detects that it is being hacked.  Suppose they hack back and cause damage to the organization that owns one of these redirection servers.  What then?  Does this constitute an Act of War?  And if the insurance company thinks a state sponsored hack is an Act of War, who has the burden of proof?

In short, I don't have the answers here.  But these are great questions to be asking your insurer.  I know I will be.

Sunday, October 30, 2016

How to throw an election - aka who has your data

I was thinking today about how much of my data is in the cloud with different service providers.  My email is hosted by Gmail. My chats are on Gmail, Slack, Skype, etc. My Twitter DMs.  All of this is "guaranteed" to be private by the service providers, but as we've seen with NSA's recent problems with Snowden and Martin, even the most secure environments have leaks.  I'm not that interesting, so it's unlikely any service provider insider would leak my personal data.  But what data might they be motivated to leak?

As I was considering the "service provider insider" idea, I thought about two distinct scenarios where an insider might be tempted to leak data.  I'm sure there are more, but the two I can think of off hand are someone shorting a stock and someone influencing an election.

Election tampering
There are radicals on both sides of the aisle who probably view job loss, financial penalties, and perhaps even jail time (let's be honest, it wouldn't be much) as a small price to pay for swinging a presidential election.  I'm sure that Trump and Clinton have both said things using service providers (whether Twitter DM, Gmail, Skype, etc.) that they'd like to forget.  I know I have.  If someone released non-public data from a candidate's communications, that could easily swing an election.  This is probably more damaging in smaller races, but depending on the data released, I could see a national election turning.

Stock shorting
If you mined non-public data (like Gmail does all the time), you might find information that leads you to believe that the price of a particular stock is going to fall.  In this case you can short the stock and reap the rewards.  But what if you short the stock and the stock price rises, possibly because the damaging information hasn't come to light?  Leak that data and cash in on that lower stock price!  Of course this is illegal, but that's not the point.

Where is your data?
If you've got the easy stuff checked off for infosec, step back for a moment and consider what damage your non-public data could do to your organization.  Insider threats are real.  Most mature infosec organizations understand insider threats and are looking for insiders in their organizations (with varying levels of success).  But are you considering the threat posed by an insider at a business partner, service provider, or other trusted party?

Closing thoughts
A good data inventory will help organizations prepare for insider threats, no matter where they occur.  Tabletop exercises are invaluable in evaluating your insider detection and containment strategies.  If you need help with a tabletop, please hit me up over at Rendition Infosec and we'll be happy to help.

Thursday, October 27, 2016

Playing with fire and bug disclosure

A teen in Arizona has been arrested for hacking iOS devices to dial 911 repeatedly.  In one case, a 911 call center was reportedly so overwhelmed with calls that it "almost shut down."  The original press release is here.

But is this arrest warranted?  The teen wanted to display his elite hacking skills on iOS and claims to have accidentally pushed "the wrong link" to Twitter, where more than 1800 people clicked on it, congesting 911 centers.  The hacker known as "Meet" said that he intended to deploy a "lesser annoying bug that only caused pop ups, dialing to make peoples devices freeze up and reboot."


I think this admission is where Meet is in trouble.  He admits he intended to commit a crime by causing denial of service to devices he does not own.  If his statements are taken at face value, he did not mean to disable the 911 system.  But the fact is that he disabled the 911 system in the commission of another crime, the attempted DoS.

The denial of service is obviously concerning, but it raises several important questions, such as:
  • If Apple's bug bounty were open and available to all researchers, would Meet have tried to market his exploit there instead of this "prank" gone bad?
  • Should Meet be punished for the damage caused or the intent?
  • In a case like this where a cyber attack has a potential impact on life safety, do special circumstances apply to sentencing since lives may have been endangered?
  • If legal frameworks don't exist to do this today, should they?
I'm intentionally ignoring the potential for cell phones to take down 911 call centers here.  Plenty of news outlets are already doing a good job of sensationalizing that aspect.  They don't need my help.  I'm much more interested in the difference between the suspect's impact and intent.  We talk about that in the SANS cyber threat intelligence (CTI) class.  As CTI analysts we have to focus on the adversary intent since that tells us much more than the impact observed, especially when those things don't cleanly match.

What about Meet's friend?
According to the press release, Meet was notified of the bug by his friend.  Does this make his friend an accomplice?  It may depend on his friend's intent in sharing the bug with him.  I think the fact that the Apple bug bounty is a closed ecosystem is significant here.  It seems especially likely that the friend might reasonably have expected Meet to cause some sort of mischief with the bug, since it couldn't have been reported to Apple under the closed bug bounty.


Thought Exercise
Suppose you plan to rob a convenience store, and I agree to be your getaway driver.  If during the robbery, you kill the clerk, I can be charged with murder.  This is true even if:
  • I never fired a shot
  • I never held the gun
  • I didn't know you brought a gun to the robbery at all
Applying this same standard to the cyber domain, a question of liability looms large.  What liability and culpability does Meet's friend have in this case?  Smarter people than me will definitely answer, but it's a question we should be thinking about now before we have an issue.  I share threat and vulnerability data all the time.  What happens if someone does something malicious with my vulnerability data?  Do I share in the liability?  

Monday, October 24, 2016

Vulnerabilities in St. Jude medical devices confirmed by independent 3rd party

An independent third party (Bishop Fox) has confirmed many of the claims made by MedSec and MuddyWaters about the vulnerabilities in St. Jude medical devices.  St. Jude filed a lawsuit after MuddyWaters released information about security issues in their devices and reportedly shorted St. Jude's stock.

The report (located here) details a number of inaccuracies in St. Jude's claims, which they swore to the court (under penalty of perjury) were true to the best of their knowledge.  This is a bad place for St. Jude to be in.  It appears that St. Jude is either:

  1. Incompetent at security, so much so they can't reproduce a problem even after being notified about it by a third party
  2. Lying to prop up its stock price

The latter is illegal, but the former is likely to be problematic in a civil case.  How will jurors trust any St. Jude security personnel who take the stand?  Their very credibility appears to be substantially compromised at this point.

There's a lesson here for organizations making "knee jerk" reactions to public statements about their security. When your security sucks and you're called on it, that's bad.  But when you have time to confirm reports of vulnerabilities and fail to do so, that makes you look REALLY bad.  In the interest of full disclosure, MedSec and MuddyWaters didn't provide proof of concept code to St. Jude, but did provide that (and additional details about their discoveries) to Bishop Fox, who confirmed their findings.  This is not illegal in any way, though some might find it unethical.

You should read the report, but I'll point out some of the more damning claims below.

Researchers used a bag of meat to simulate human tissue in their tests.  For obvious reasons, they didn't deliver shocks or test findings on real patients.  This testing rebuts St. Jude's claims that the attacks would not work in "real world" environments.


The problems with the excerpt above are obvious.  This cuts to the core of St. Jude's credibility in its ability to assess security concerns.  Obviously damning in any civil case.


This is another huge problem for St. Jude.  St. Jude says that access controls prevent anyone but a physician from gaining access to the device, but this statement is demonstrably false.


Again, more demonstrably false claims.  Bishop Fox researchers were able to replicate the attack described by MedSec.


By far, the most damning claim is that the key space used for "encryption" is only 24 bits long.  Earlier this year, the FTC settled with a dental software manufacturer for not using AES to protect patient data.  The dental software certainly isn't life-saving. I'd say St. Jude has a problem on their hands.
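
The back-of-the-envelope math makes the point (the attacker's guessing rate is an assumption for illustration):

```python
# Back-of-the-envelope: how long does a 24-bit key space survive?
# The 1M keys/second rate is an assumed, very modest attacker.
keyspace = 2 ** 24   # 16,777,216 possible keys
rate = 1_000_000     # keys tried per second (assumption)
print(f"{keyspace:,} keys, exhausted in ~{keyspace / rate:.0f} seconds")
# ~17 seconds of brute force; compare AES-128's 2**128 possible keys
```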

The lessons about addressing cyber security problems in your products are obvious, particularly if you are a publicly traded company.  When confronted with a notification of a security issue, you should move to address it (and the public) quickly.  But don't let your desire for a quick release of information lead to releasing demonstrably flawed data to the public.  In my assessment, the Bishop Fox independent confirmation of MedSec's findings deals a lethal blow to St. Jude's civil case - and maybe even their future business.

Sunday, October 23, 2016

So you found a vuln in that botnet code. Now what?

The awesome @SteveD3 on Twitter (you should be following this guy) asked a great question recently.
Question: If a botnet has vulns in its code, such as overflows or path manipulation... what could the good guys do w/ these flaws?
This is really thought provoking, and although I can probably answer this in 140 characters, the answer really does deserve a little more attention than that.

The short answer is "not much, at least not legally."  If you work for a government agency with special authorities, you are special.  Stop reading here as this doesn't apply to you.

If you don't work for a government agency, you could disclose this vulnerability (i.e. sell it) to a government agency and profit from your hard work (assuming they are willing to pay or haven't already found it themselves).  This might be of some use to a government agency gathering data.  After all, there's no reason to exploit targets yourself if someone else already did the hard part for you (and then installed vulnerable malware).

If the vulnerability is on the command and control server, that server likely belongs to a third party.  If you exploit it (even in the name of doing good), you're breaking the law.  If you find a DoS vulnerability in the C2 software and exploit it, you're still breaking the law.  This is true whether the attacker directly leased a VPS or compromised some poor website running Drupal (yeah, I'm kicking the slow kid here, but I can't help it).  In any case, you are exceeding your authorized access on the C2 server and that's a CFAA violation.

A SANS student posed a similar question to me a while back.  The question assumed that he found a vulnerability in botnet client code that would allow him to take control of infected nodes.  He hypothesized that even if taking control of the compromised machines was de facto illegal, nobody would care if he did it just to patch the machine itself and remove the malware.  But this brings up several important questions of its own, such as:
  1. What if the system you patch is a honeypot that is being used to track commands sent from the C2 server? In this case, you've exceeded your authority and interfered with the collection operations of an organization, potentially causing material harm.
  2. What if the system should not be patched due to third party software running on the machine? What if this is an industrial systems controller and patching causes physical harm?
  3. What if everything should have worked, but something goes awry due to unforeseen circumstances (you know, because Windows).  What then?
There are just too many unknowns to really do anything meaningful with a vulnerability like this (at least while wearing a white hat and carrying liability insurance).  As we tell our clients at Rendition Infosec, the easiest way to avoid unforeseen consequences of hacking back is just not to engage in it in the first place.  

Friday, October 21, 2016

Martin classified data leaks - pretrial court documents

In response to a pre-trial detention hearing, United States attorneys filed a motion to deny the release of Harold Martin III from pretrial confinement.  Based on this document, we know a lot more about the strength of the government's case.

Misrepresentations or misinterpretations?
I've seen some pundits with poor reading comprehension misinterpret two things in this section.  First, several pundits said that Martin had "dozens of computers."  That's not what the statement says.


The government says that they seized "dozens of computers and other digital storage devices" which is far different.  The wording may be intentionally designed to make the judge believe that Martin had dozens of computers.  But this isn't surprising to me.  Martin is a practicing infosec professional.  Take one trip to BlackHat or RSA and you can bring back a dozen or more USB devices from vendor booths.  Assuming that at some time in his career Martin went to a security conference (or many such conferences), he would likely have dozens of digital devices.

The other misinterpretation I'm seeing a lot in the media is that Martin stole 50TB of classified data.  But the government never makes this claim.  They only claim that they recovered 50TB of storage devices from his residence.  They never discuss (and honestly probably do not know) what percentage of the storage media contains classified data.

Handwritten notes
This next excerpt is particularly damning.  A document recovered from Martin's residence contains handwritten notes, seemingly explaining the document to those who lack the context he has.


If the government is to be taken at face value, it appears that Martin was planning to pass this document to a third party. Whether Martin intended to pass the printed document to a reporter or a foreign government, the allegation is highly disturbing.

Are we still doing this "Need to know" thing?
This excerpt suggests that Martin had documents in his possession for which he had no need to know.  In a post-Snowden NSA, this seems a little cavalier - how did Martin come into possession of this very sensitive need to know document?


Documents stored openly in the back of Martin's car
This is huge - it's pretty amazing to think about classified documents stored openly in Martin's home and/or in the back of his car.


Later in the document, the government points out that Martin's residence lacks a garage. This means his car was parked out in the open at night, probably with classified storage media inside.  The government states that's how they found it when they served the search warrant.

Classified theft may have begun in 1996, but the government doesn't claim that
The documents state that Martin had access to classified information starting in 1996.  However, they stop short of saying when he first started stealing data.  Many media outlets have talked about how he has been stealing data for 20 years.


Read the filing carefully however and you will see that there is no mention that Martin stole data for 20 years, only that he's had a security clearance that long.

Disgruntled? Yeah, I'd say so...
In 2007, Martin apparently drafted a letter to send to his coworkers.  It appears that he's a little vindictive and disgruntled.  Feeling marginalized (and wanting to feel important) is one of the reasons people commit treason.  Their failure to let Martin "help them" may have been a catalyst.


And of course "They" are inside the perimeter.  If the government's claims are to be believed, nobody knew this better than Martin himself.


That's all folks
I could keep writing, but this is probably a good place to drop off.  If you're really interested in more, you should read the source document.

Prosecutorial deception in the Harold Martin case

The government has released its arguments for pretrial confinement for Harold Martin.  Most of the arguments in the document are sound, but one section is nothing short of deceitful.  Unfortunately, it makes me question other elements of the government's case.  Even if the rest of the case is solid (and it appears to be), the prosecutor here should be disbarred for attempting to deceive the judge in the case by misrepresenting facts.


Martin engaged in encrypted communications. So what? If you are reading this blog, chances are you are also "engaging in encrypted communications."  Martin had remote data storage accounts.  Again, so what?  This statement could be true of anyone with a Gmail address or an Office 365 account.  I'm not impressed.  Martin had encrypted communication and cloud storage apps installed on his mobile device.  Cloud storage apps?  If he's an iOS user, that's iCloud.  If he's an Android user, he has Google Drive installed by default.  Encrypted communication apps could refer to iMessage, Gmail chat, or even Skype.

Taken at face value, the government's case against Martin seems strong.  So why then does the government reduce its arguments to sweeping generalities?  I can see three probable explanations.

  1. The prosecutor doesn't know any better.
  2. The prosecutor thinks the judge doesn't know any better.
  3. The DoJ is setting precedent so these circumstances can be used later to get pretrial confinement for a suspect.

The pessimist in me thinks it's probably the last of these.  I hope the EFF and other civil liberties groups weigh in on this; the precedent is highly disturbing.  Trying to obtain pretrial confinement by arguing that the defaults on a user's phone are somehow malicious is a gross misrepresentation.  I hope the government amends its filing to more clearly represent the facts.

Saturday, October 15, 2016

CIA cyber-counterstrike probably not a leak after all

Recently, NBC News ran a story that CIA is planning a cyber counterstrike against Russia to retaliate for interfering with the US elections.  Initially, I saw people taking to Twitter talking about how "loose lips sink ships" and other such cliches.  But is this a leak at all?  I think it's at least worth considering the other possibilities here.

Theory #1 - this is total misinformation
While CIA is an intelligence organization, their recent leak record has been a little better than NSA's.  It therefore feels less likely that this was leaked by CIA sources directly.  Also, as WikiLeaks pointed out in a tweet, CIA is probably not the right organization to carry out such a mission.


Now I don't usually cite WikiLeaks as a reliable source, but I think they are probably right here.  If this isn't a job for US Cyber Command, what would be?

Theory #2 - This is an exquisitely planned information operation
If you're not familiar with military deception operations, now would be a great time to fix that.  We're very likely to see a larger number of these in the future as cyber conflicts between nation states become the norm.

This "leak" feels to me like a deception operation designed to undermine the Russian people's confidence in their government.  That's the only reason I can think of to mention the CIA in the leak vs. the NSA or US Cyber Command.  The Russian people know of the CIA just like we know of the KGB in the US.  NSA and Cyber Command just aren't household names there.

How much will this "leak" impact Russian government information security operations?
While the leak may increase awareness of cyber attacks at the rank and file level, it isn't likely to change the Russian government's plans or information security posture in any way.  Whether or not the Russians are responsible for the DNC hack, now that they've been called out by US intelligence agencies, they are doubtless preparing to defend against a retaliatory cyber attack.  Saying "we're going to hack you" is completely unnecessary to prompt the Russian government to prepare for such an attack.

Increasing confidence of US citizens
If this is an information operation and not a leak, it does much to pacify the average US citizen who otherwise sees the Russian cyber attacks as being largely ignored.  At least now they can point to this operation and feel like Russia hasn't "gotten away with something."

Bolstering recruiting
Whether this is a true leak or an information operation, it almost certainly benefits the US intelligence community's ability to recruit future cyber operators.  "I can't tell you this was us or that you'll have the chance to stick it to Russia, but did you see that story about the US retaliating?"

What do you think?
I'd love to know what you think about this too.  Please feel free to continue the discussion on Twitter (I'm @MalwareJake) or post your thoughts in the comments section.


Saturday, August 20, 2016

Internet God Mode

Need a Konami code for pwning the Internet? NSA has some. Well, more technically, everyone has some now that Shadow Brokers leaked them.  Firewalls effectively segment your internal networks from the Internet, and remote, unauthenticated exploits against them undermine the security models of most organizations.

I've noticed a lot of people on Twitter saying they don't care about three-year-old firewall exploits.  But let's be clear: many of these exploits are still not patched today.  Some pundits have noted that many products targeted by the exploits (e.g. PIX) are not very commonly used today.  Point taken, but they were much more common three years ago when this tool cache was originally created.  How often do you change the private keys on your VPN? For compatibility reasons, did you roll your PIX private keys forward when you upgraded to an ASA?  If you aren't sure, I recommend changing the private keys on your VPNs.  It's relatively easy in the scheme of things.
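
If you aren't sure how old your keys are, certificate age is a rough proxy.  Below is a minimal sketch, assuming an SSL VPN that presents a TLS certificate and the third-party cryptography library; the hostname is a placeholder, and an IPsec-only VPN using pre-shared keys won't be visible this way:

    # Hypothetical sketch: report the age of the certificate an SSL VPN presents.
    # VPN_HOST is a placeholder - substitute your own endpoint.
    import ssl
    from datetime import datetime

    from cryptography import x509  # third-party: pip install cryptography

    VPN_HOST = "vpn.example.com"
    VPN_PORT = 443

    pem = ssl.get_server_certificate((VPN_HOST, VPN_PORT))
    cert = x509.load_pem_x509_certificate(pem.encode("ascii"))

    issued = cert.not_valid_before  # naive UTC datetime
    age_days = (datetime.utcnow() - issued).days
    print(f"{VPN_HOST}: certificate issued {issued:%Y-%m-%d} ({age_days} days ago)")
    if age_days > 3 * 365:
        print("Certificate (and possibly its key) predates the leak - consider rolling it.")

A certificate reissued over the same private key will still look fresh here, so treat a recent issue date as necessary but not sufficient evidence that the key was actually rolled.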

Internet God Mode for CNE operators
Need to pwn the Internet? Firewall exploits will help...
How should your defensive strategies change when you consider that your firewalls may themselves be compromised? I'll cover that in depth in a later post.  But the firewall exploits released to every attacker on the Internet are seriously disturbing.  We should not downplay their significance, even if some of the targeted products are no longer supported.  Many organizations run unsupported hardware and software.

Two months ago at Rendition Infosec, I worked with a well-meaning organization running 10-year-old IOS on their routers, with 25% of the environment still running XP and Server 2003.  This is an extreme example, but most organizations have some percentage of unsupported software and hardware for a variety of reasons - usually involving budget.

Finally, it's worth noting that it's highly unlikely NSA has stood still in its firewall exploit program since this tool cache was stolen in 2013.  In the last three years, NSA has likely researched and acquired other firewall exploits that work against more modern platforms.  I've seen some very dense people (on Twitter and elsewhere) suggesting that if you want to be safe from NSA, you should just deploy Palo Alto, because there were no Palo Alto exploits in the dump.  This, like many other comments about the dump, is extremely myopic.  Who knows what NSA has today for firewall exploits and implants?  I certainly do not, but this release will absolutely change the way I think about defense in depth.

Thursday, August 18, 2016

Cisco downplays SNMP vulnerability exposure

Unless you've been living under a rock this last week, you know that NSA's firewall hacking tools have been stolen and that at least a subset of them has subsequently been released.  The released tools exploit and implant malware on devices produced by US companies.  One of those vulnerabilities, an SNMP vulnerability (code named EXTRABACON) affecting Cisco products, has been downplayed in a somewhat disingenuous way by Cisco's security team.

Look, nobody likes to be faced with an 0-day.  And it's an extra huge slap in the face to know that not only did your government discover the vulnerability before you did, but they kept it a secret from you for at least three years.  But slap in the face aside, now that the secret is out there it's time to take responsibility.

Cisco's blog correctly notes that the attacker has to know the community string and must talk to an interface with SNMP enabled.  By default, that's only the management interface.  But in the field, very few organizations use this configuration.  Many, if not most, have SNMP enabled on all internal ports, despite best practices.  We often find SNMP enabled (at least read-only) on the DMZ interface in customer environments.  We advise against this of course, but I want to deal in reality rather than the "this is almost never exploitable" vibe of Cisco's blog post.  We have even seen SNMP accessible from the Internet.  That's criminally stupid, but it can and does happen.
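
Don't take the "management interface only" assumption on faith - test it.  Here's a minimal sketch using the third-party pysnmp library (the interface addresses and community string are placeholders for your own environment) that probes each firewall interface for an SNMP response:

    # Audit sketch: which firewall interfaces answer SNMP v2c reads?
    # Addresses and community string below are placeholders.
    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd)

    INTERFACES = ["192.0.2.1", "198.51.100.1", "203.0.113.1"]  # mgmt, inside, DMZ
    COMMUNITY = "public"  # also try your real community strings

    for addr in INTERFACES:
        errorIndication, errorStatus, errorIndex, varBinds = next(getCmd(
            SnmpEngine(),
            CommunityData(COMMUNITY, mpModel=1),  # SNMPv2c
            UdpTransportTarget((addr, 161), timeout=2, retries=0),
            ContextData(),
            ObjectType(ObjectIdentity("1.3.6.1.2.1.1.1.0"))))  # sysDescr.0
        if errorIndication:
            print(f"{addr}: no SNMP response ({errorIndication})")
        elif errorStatus:
            print(f"{addr}: SNMP error: {errorStatus.prettyPrint()}")
        else:
            print(f"{addr}: SNMP ANSWERED - {varBinds[0]}")

Any interface that answers on a community string an attacker could guess or sniff is in scope for EXTRABACON-style attacks, whatever the default configuration says.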

Cisco's diagram of EXTRABACON exploit scenarios
Cisco says in their narrative: "In the example above SNMP is only enabled in the management interface of the Cisco ASA. Subsequently, the attacker must launch the attack from a network residing on that interface. Crafted SNMP traffic coming from any other interface (outside or inside) cannot trigger this vulnerability."  But that relies on readers understanding the example and correctly evaluating whether their environment matches it.

Don't think this is a problem?  One of my Rendition Infosec customers already called to confirm this could only be exploited through the management interface.  They read the article and fell for the "in the default configuration..." doublespeak. The problem is that they don't use the default config, so the assurance doesn't apply. An attacker anywhere in their network could use this exploit against their ASA.

As for the point that the vulnerability requires you to already be in the network: so what?  Phishing gets me into a network nearly 100% of the time.  And how long does an attacker need that phishing access to exploit and implant a firewall?  I don't know, but I'm guessing not long.  Once that happens, instead of protecting the organization, the firewall actually becomes a liability.


The firewall is a point through which all traffic in the network flows.  It is not easy to perform incident response on a firewall (e.g. an ASA).  In most cases the firewall itself is directly accessible from the Internet.  A compromised firewall is also not part of the threat model most organizations consider.  That obviously needs to change in light of the NSA tool disclosures, but my point is that this is a devastating vulnerability and there is no point in downplaying it.  If I were in Cisco's shoes, I'd be screaming foul from the rooftops to my elected representatives.

Wednesday, August 17, 2016

On cover terms

Cover terms, or "code names" as they are often called, serve a very useful purpose in a wide range of operations. Their value in intelligence is undeniable. They are also useful in enterprise incident response (IR). As a consultant, I sometimes find myself needing to take a phone call in less than opportune environments, and cover terms for customers and particular incidents help keep me from disclosing confidential information.

But there's an art to selecting cover terms for incidents.  A few guidelines I follow are:

  • Don't base the term on the name of the client (it's not much of a cover)
  • Don't make the cover term the same as the name of the malware used (many different attackers may use variants of the same malware)
  • Run your names past your PR department

This last one (involve the PR team) is pretty important, but it's rarely done. Experience has taught me to assume that everything will get out to the press eventually. You don't want a funny inside-joke name showing up there: what's funny with the appropriate inside context probably won't be funny absent that context, and it makes your organization look really bad.  Over the years I've seen lots of obscene and questionable cover terms.  In my younger, dumber days I might have even created a few myself. But I know better now.
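
If you want to take the guesswork (and the inside jokes) out of the process, it's easy to generate neutral terms programmatically.  A minimal sketch, assuming tiny placeholder wordlists and a blocklist you'd populate with your own client names and malware families:

    # Generate candidate cover terms from neutral wordlists, rejecting
    # anything that collides with client names, malware families, or in-jokes.
    import random

    ADJECTIVES = ["COPPER", "SILENT", "NORTHERN", "GRANITE", "AMBER"]
    NOUNS = ["HARBOR", "LANTERN", "MERIDIAN", "SPARROW", "QUARRY"]
    BLOCKLIST = {"ACME", "ZEUS", "DRIDEX"}  # placeholder client/malware names

    def cover_term() -> str:
        while True:
            term = random.choice(ADJECTIVES) + random.choice(NOUNS)
            if not any(blocked in term for blocked in BLOCKLIST):
                return term

    print(cover_term())  # e.g. COPPERHARBOR

No script replaces the PR review, of course, but it does keep anyone's sense of humor out of the candidate pool.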

Why am I bringing this up?  The leaked Equation Group files being auctioned contain a large number of tool cover terms, many of them questionable.  For instance, I can't help but notice the obviously phallic undertone in the large number of BANANA-related terms (e.g. EPICBANANA).  Either that or someone just really loves bananas.

My personal favorite in the cover term set released has to be BUZZDIRECTION. Whoever snuck that past the cover term censors is a freaking genius at word play. At first glance it looks totally innocent, but try saying it fast once and you can't help but appreciate the adolescent quality it has.  Totally innocent mistake? Given the other phallic references, I highly doubt it.

While others focus on the exploits and tools themselves, I figured I'd focus on this somewhat less obvious implication of the leak - namely, that you must assume everything will be leaked eventually. A little care up front can prevent your organization from looking like a beer-fueled frat house in the press later.

Monday, August 8, 2016

QUADROOTER - is the sky really falling?

Check Point released a four-pack of root vulnerabilities in Android at DEFCON.  They named the group of vulnerabilities QUADROOTER, presumably because they are four vulnerabilities that result in root access on Android.  One of the first media articles I read on this actually ran the headline "the sky is falling."  Um, let's dial that back three or four notches...

At Rendition Infosec, we deal in realistic risk.  Let's distill out the hype and talk some facts about the vulnerability:

  1. It appears to require the user to install a malicious application to exploit anything.
  2. The classes of vulnerabilities present are unlikely to be remotely exploitable if a user simply views a malicious webpage.

So how would an attacker exploit any of these four vulnerabilities?  Simple: they'd trick a user into installing a malicious application.  Let's hope the Play Store is screening for applications exploiting these vulnerabilities at this point.  If not, shame on Google.  If it is, the user would have to side-load the malicious APK or install it from a rogue app store.  Sure, a vulnerability rooting the phone is bad, but a malicious application can do some pretty bad stuff without rooting your phone.  The sky simply is not falling, despite Chicken Little's best wishes.
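
If you want to check a device's exposure to this attack path, the two things worth looking at are whether side-loading is enabled and how stale the security patch level is.  A minimal sketch, assuming adb is on your PATH and a device is connected with USB debugging enabled (the "unknown sources" toggle shown is the pre-Oreo global setting):

    # Check two QUADROOTER-relevant settings over adb.
    import subprocess

    def adb_shell(cmd: str) -> str:
        return subprocess.run(["adb", "shell", cmd],
                              capture_output=True, text=True).stdout.strip()

    # "0" means installs from unknown sources are blocked
    unknown_sources = adb_shell("settings get secure install_non_market_apps")
    print(f"Unknown sources enabled: {unknown_sources != '0'}")

    # Monthly security patch level, e.g. "2016-08-05"
    patch_level = adb_shell("getprop ro.build.version.security_patch")
    print(f"Security patch level: {patch_level or 'unknown'}")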

On responsible disclosure
I'm not one to debate the merits of responsible disclosure; I have some pretty mixed opinions on the topic.  But when you disclose vulnerabilities on a conference schedule rather than a vendor patch schedule, you lose the moral high ground.  I am not personally against full disclosure, but just remember this day if/when Check Point criticizes someone else's disclosure practices.  The fact is that these vulnerabilities won't be patched until September at the earliest.

On naming vulnerabilities
If you follow the blog, you'll know I've been critical of this practice.  This name is especially confusing since it covers four separate vulnerabilities.  Let's hope they all get patched at the same time to avoid creating more confusion. Also, the name sounds like something you'd call a drone.

It's just a freaking jailbreak
We don't name jailbreaks and write white papers about them. In fact, people laud them as a way to break free of Apple's tyrannical grip on their iOS devices. Why are these Android vulnerabilities to be feared while iOS jailbreaks are something to run as quickly as possible before Apple patches them?

Collecting data...
I don't understand for the life of me why Check Point chose to put their white paper behind a data collection wall.


If you are really "just interested in warning the public," don't require people to enter their data to read your paper.  That's a grade-A dumb move.  Here's hoping that data collection wall comes down so more people can easily read the source data about this Android jailbreak.   A Twitter friend shared the link with me (and anyone else who wants to search for it) and I'm sharing it here. Suck it, Check Point.

Practice safe apps(?)
Unless you connect to app stores other than the Google Play Store, download apps over insecure wireless, or have been repeatedly tricked into installing malicious apps on your phone, you probably don't need to worry about QUADROOTER.

Final score:
+10 points to Check Point for finding the QUADROOTER vulnerabilities
-1 point for putting up a registration wall
-3 points for completely unnecessary hype
-4 points for scaring my mom - she's a technotard who can't read past the hype