Haiku #32 Executive metrics


Mean time to find, fix

Two very useful metrics

Do you use others?

Finding the right terms to communicate to executives and boards is more art than science. How do you measure success?

Every single tool employed by our Security Operations Center comes with a litany of metrics: events per second (EPS); endpoints protected; files and processes blocked, quarantined, or permitted; alerts triggered; emails captured for potential malware, malicious URLs, phishing, or domain spoofing; percentage of people caught by ethical phishing. All of these are great for analyzing the effectiveness of a given tool, but they prove less useful when communicating with a group of people who want to know why one $200,000 investment should be given higher priority than another in the budgetary process.

As non-revenue-producing cost centers, information systems departments and cybersecurity functions must find ways to communicate several concepts while stripping out industry jargon and product-specific technical terms. One method of conveying this information is in terms of risk. What’s our exposure factor? What’s the likelihood that an event this tool could prevent will happen in our environment? If that event happened, how much could it cost our organization? What’s the cost of implementation (including professional services, for the most accurate figure)? What’s the difference between the expected loss and that cost? Are there ways to mitigate the problem without this solution? What do those methods cost?
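Those questions map onto the classic quantitative risk formulas (single loss expectancy and annualized loss expectancy). A minimal sketch, with entirely hypothetical dollar figures and probabilities:

```python
# Classic quantitative risk terms:
#   SLE (single loss expectancy) = asset value x exposure factor
#   ALE (annualized loss expectancy) = SLE x annualized rate of occurrence
def single_loss_expectancy(asset_value: float, exposure_factor: float) -> float:
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle: float, aro: float) -> float:
    return sle * aro

# Hypothetical example: a $2M data set, 25% of its value exposed per
# incident, incidents expected once every two years (ARO = 0.5).
sle = single_loss_expectancy(2_000_000, 0.25)  # 500000.0
ale = annualized_loss_expectancy(sle, 0.5)     # 250000.0

# The board-level comparison: expected annual loss vs. cost of the control.
control_cost = 200_000
print(f"ALE ${ale:,.0f} vs. control cost ${control_cost:,.0f}")
```

In this made-up case the expected annual loss exceeds the control cost, which is exactly the comparison the budget conversation turns on.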

Another valuable model for clarifying what a tool could accomplish, or has accomplished, is the event timeline. Boards and executives are increasingly aware of how long criminals operated undetected inside other companies before they were found. In my research, a common term for this is mean time to find. We use this when communicating about the budget. We come up with current examples determined by proofs of concept, compare those performance numbers against past performance in the same area (presumably worse without the tool), and present the difference as a percentage of improvement. Leadership is also alert to how long an organization knew about a security event before action was taken. We refer to this internally as mean time to fix. These numbers get the same analytical rigor and are presented at budget time as percentages of improvement.
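The budget-time math here is simple enough to sketch. The hour figures below are hypothetical placeholders standing in for proof-of-concept and baseline measurements:

```python
from statistics import mean

def pct_improvement(before_hours: float, after_hours: float) -> float:
    """Percentage improvement in a mean-time metric (lower is better)."""
    return (before_hours - after_hours) / before_hours * 100

# Hypothetical numbers: past incidents took 96, 120, and 72 hours to find;
# in the proof of concept the same scenarios took 4, 6, and 2 hours.
baseline = mean([96, 120, 72])   # 96
with_tool = mean([4, 6, 2])      # 4
print(f"Mean time to find improved {pct_improvement(baseline, with_tool):.0f}%")
```

The same calculation works for mean time to fix; only the input measurements change.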

Not all tools can be measured in quite the same manner because the problems they solve aren’t so cut and dried. When that happens we start mapping those tools to regulatory compliance or to maturity models. As a healthcare organization, we use the HIPAA regulations. We also look to frameworks like the CIS Top 20 and NIST, along with attestations made to health insurance companies (payers), and we review the questionnaire for Most Wired Hospital. These metrics are somewhat less concrete, but they help show that our team uses recognized benchmarks to self-evaluate, keeps pace with industry leaders, and employs the best measures possible to secure our community within the given means.

Haiku #29 Non-binary choices


Safety. Security. 

Do they negate privacy?

Why can’t we have all?

The anti-liberal movement seeks to regulate a free & open internet in the name of internal security. We must lead the effort in a free and open way.

Any time a major cyber attack occurs, world leaders or their national spokespeople issue rhetoric insisting that some sort of government regulation must limit the kind of behavior that allowed the attack. Of course, this furor has a short life cycle; it dies down as quickly as it starts. I’d like to peel back the onion on why that is.

As a nation of laws, we rely on the vast majority of people to abide by them for order to be maintained. Criminal acts are, by definition, broken laws. Creating a new law doesn’t fundamentally alter that equation: lawbreakers will still break the law to achieve their desired end state, and each law we put on the books must be maintained and enforced. What would laws locking down specific behaviors on the internet look like? They might bear a strong resemblance to the internet usage laws in places like China, North Korea, or Iran…places that specifically limit certain types of behavior to preserve their own internal domestic political balance.

I’m certainly against the attacks continuing to happen, and to the extent that we can, perhaps there are other methods to achieve the same end. Each would require its own enforcement tail, so to speak: the costs associated with making sure people comply. Some measures seem obvious. Firewalls, endpoint protection, and security awareness training address different levels and types of nodes on the network. If we go one step further, we might look at DNS filtering or web content review and establish a baseline set of norms for how companies and individuals should set up these boundaries. But this is exactly the gray area where anti-liberal governments start placing controls to manage domestic behavior. It’s a fine line to walk: one person’s free speech is another person’s hate speech or echo of sedition.

Cyber leaders then are left to propose controls in an ungoverned way that keeps corporate populations safe from harm without infringing on the rights of others. We may somehow write laws that govern what people can look at, but in reality this would only increase the amount of perceived criminal activity. It would not intrinsically change the nature of the attacks. The attacks are designed to circumvent people first, then systems second. By compromising the people through social engineering, they get tacit “permission” for their action. How could we possibly write any kind of effective limitation to criminalize phishing attacks? And the follow-on steps beyond phishing attacks are already criminalized so what deterrent effect would such proposed laws have? The laws that are already in place do not eliminate criminal behavior.

Any proposal that would lock down the internet to reduce criminal behavior would essentially give us more people monitoring our daily activity, searching for clues about possible future transgressions, without the benefit of stopping the crimes already occurring, unless we significantly surrendered our freedom and allowed a police state to emerge. Please start taking internal action to address as much as possible, in as near real time as the organization will allow. The alternative, should it come to pass, might be less palatable. Once surrendered, freedoms are difficult to recover. Sacrificing freedom in the name of security is an appealing, perhaps even comforting, thought in the immediate aftermath of an attack, but the result would eventually erode our modern lifestyle. And the tragedy of erosion is that it often goes unnoticed until it’s too late to recover the lost ground.

Haiku #28 Incident Response


Capital One Hack

Detect, respond, recover

Good work or mistake?

Any time some organization gets hacked, there will be Monday morning quarterbacking. But the most valuable insight comes from self-reflection: could we have done it better? Are we vulnerable to the same issue?

While the Capital One hack has probably faded from the minds of many, the latest hack hasn’t. Whoever the target was, we’re all looking at the results, and some of us are thinking (somewhat uncharitably) that the defenders should have known better, the incident responders should have caught it faster, and leadership should have budgeted for the proper tools to defend against that particular type of threat.

No one has a crystal ball. If we did, attacks would never succeed. The military adage “no plan survives first contact” is tacit recognition that enemies do not cooperate with our plans. They specifically devise ways to circumvent them and make it their mission to find the weakness that we’ve overlooked. There will always be gaps in our defenses.

Instead of wasting the time immediately after a breach (one that occurs at another organization) criticizing who could have done what better, faster, stronger, use the time to analyze their response cycle and learn from their mistakes or successes. Then weave those lessons into the current program. Rewrite standard operating procedures, issue new policies, request emergency funding for tools that cover similar gaps.

In the case of the Capital One hack, the response was thorough, disciplined, and ridiculously below the industry average for mean time to find and mean time to fix. Those are solid crisis response metrics regardless of industry. There’s a good reason no one calls the industry crisis prevention; it’s called crisis response or crisis management precisely because it’s understood that, despite our best efforts, there will always be bad actors out there who attempt to seize control and manipulate our resources in unplanned and unimaginable ways. If our fellow defenders put out their “fire” with speed and tenacity, then we should applaud their efforts, apply their model to the extent that we can, and generally learn as much as we can from them.

Haiku #27 Talent taboos


Please. Cyber talent.

Exercise respect to all.

Interview nicely.

Skills & experience do not excuse bad habits. Be timely. Dress nicely. Treat interviewers w/ respect. Don’t swear, brag or walk on others.

Despite all the hoopla about a cyber talent shortage, there are still a few rules of the road to consider when it comes time to interview. They apply equally to internal and external candidates, but internal candidates should pay close attention because their established relationships with interviewers tend to blur the line between what’s appropriate day to day and what’s appropriate in an interview to join a new team.

Timeliness and dress set the tone for an interview. It doesn’t have to be a suit and tie, and there’s certainly no requirement to show up more than 10 minutes early, but if a candidate is calling for directions at the time of the interview, the moment for thinking that through has passed. To some extent, I’d blame the interviewer or the scheduler for not clearing this up in advance, but the candidate should also recognize the need for advance planning and take appropriate steps. Being late or careless in appearance signals to the team that a candidate lacks respect for other people’s time and sense of professionalism.

Swearing has a time and place. There are times when it’s appropriate for emotional emphasis, whether comic or simply in anger. An interview is not one of them. This is where an internal candidate might slip up, believing that their normal interaction with a group of panel interviewers is appropriate. It is not. If all candidates are otherwise equal, the one who can distinguish between professional language and casual language is the better fit. At some point and on some level, they represent the team leader, who reports to higher-level management. Swearing becomes less and less appropriate the closer one gets to the executive.

The shortage of cyber talent may delude some candidates into believing that their qualifications make them the best possible candidate for a job. Maybe they’re just the best available candidate. Either way, there’s a fine line between self-confidence and being a braggart. The former asserts their skill and knowledge. The latter uses it as a baseball bat to beat others into conversational silence on technical matters. When team leaders look for traits in candidates, they want people who will make the team more effective and bring immediate value. Those who alienate interview panelists will exhibit the same behavior once hired.

Interviews should be a balanced conversation. There’s a little bit of give and take; if one person is dominating the conversation, there might be a problem. Of course, the interviewer wants to know all about why the candidate is the best choice, but if the interviewee lacks the internal governor that alerts them not to interrupt, or not to essentially tell an interviewer that their views are immaterial, irrelevant, or unimportant, then those behaviors will play out at work, too. Team dynamics are important. A panel interview should be used as a way to see how the candidate interacts with the team. And it may become apparent that they won’t be a good fit simply because they value their own opinions more highly than those of others. No one likes a conversational bully or an oxygen thief.

We all want to grow our teams effectively and we want the best candidates. Sure, we want talented people, but we want them to be multi-talented, not one-dimensional. The best candidates understand that people skills, like respecting and valuing the input of others, are just as important as mad tech skills.

Haiku #24 Privacy, please


Fearful of breaches,

The public looks for answers

when privacy breaks

The human component of cyber security is that our safeguards are intended to guard people’s right to privacy. When that fails, people want to understand why, even if there are no easy answers.

In every breach there is an action (or inaction) that let the attacker in the door, whether it’s a patch that wasn’t properly applied or a phishing email that someone inadvertently clicked. When that happens, the incident response plan kicks in and the security team starts addressing each issue with surgical precision. That precision needs to be balanced with compassion for three sides of the equation: first, the people who will need to be notified of the breach; second, the person whose actions “caused” the breach; and third, the co-workers who did not “cause” it.

For the public, there will always be someone who seeks to restore their own sense of balance and fairness through litigation. The company’s efforts to pursue security diligently will always be considered insufficient. There will be claims that the company failed to do enough, and assertions that countless hours were spent responding to credit or reputational events triggered by the corporation’s security event. In the end, these may be difficult to prove because so many breaches could have triggered the potential harm. Once the breach occurs, even when litigation begins, it’s important to remember that people assumed they had a legitimate expectation of privacy, regardless of how much information they themselves have placed into the public sphere for potential attackers to manipulate. Should the security team be asked to respond to calls about “what happened,” the team should remain as clinical in this response as they are in the technical response…up to a point. When explaining what happened, remain objective and present anonymized facts. But also try to relate to how that caller must feel; it’s perfectly natural for them to feel somewhat violated. They believed they could expect privacy when they provided personal information, and having that information exposed is among the last things they would have expected.

If the triggering event can be traced to one person, that person may feel a sense of guilt or shame. They may mask it well, but it may smolder just below the surface. They may experience denial, self-doubt, or any number of internally directed emotions. Or they may put their “actions” back on the security team: “I never received proper training to handle such a situation.” This is just as natural as the victim’s response and should be handled with the same objectivity. The entire information systems team may have implemented every possible control to prevent the situation, but people seek absolution. In those moments, security professionals must simply look for ways to improve the system: provide better training, and find and implement new technical layers that address the behavior that triggered the incident.

Co-workers will also have an emotional response. Depending on how embedded the company is in the community, they may feel some level of uncertainty because of the reputational harm to their employer. If an employer is sufficiently embedded in the community, the incident may successfully weather the news cycle. The question is whether the company has the resources to apply the necessary security response measures. If no cyber insurance is in place, how much of that harm carries a specific cost? Will there be a class action? Will there be a settlement? How much do the technical controls cost to prevent future attacks? Are there increased training costs (direct and indirect)? And if the company can’t sustain those costs, is there a risk it will close, as news reports suggest has happened to small businesses across the United States?

Security teams must also prepare themselves for their own internal responses to the incident. A protracted response cycle can trigger unexpected and unplanned stress. While the investigative aspect of the job fuels our intellectual curiosity, staying in that mode can quickly exhaust team members. We must recognize this and allow for adequate recovery when the initial investigation concludes.

Haiku #23 Sneaking past security


Spyware sneaks softly

Slithers past security.

Search, scan, sweep, seek, strike!

Finding the persistent threat may be among the more difficult tasks. Sometimes the best tools don’t find the clues in time.

Setting aside the plethora of information that needs to be sorted through with regard to false positives, sometimes tools make it difficult or even impossible to find threats in real time. That’s probably why the fields of threat intelligence and threat hunting were built up in the first place: people who build and defend networks daily just don’t have sufficient time to also do those two tasks.

Take suspicious email, since inbound phishing plagues so many organizations. If the native cloud provider of this service does not have sufficient protections in place to block certain types of email attacks at your existing subscription level, first ask yourself whether they’re really the partner you want or just the partner that was easiest to get under contract. Second, their tools must be user friendly and make it easy to identify and remediate email threats. If they don’t, the best decision you can make is to move on.

We spent months using our email provider’s native tools. Those tools provided a “risky sign-on” report, but when we started digging into it, the report was always delayed by at least a few days. We’d only recognize credential stuffing attacks after they occurred. This might not be a problem if you have multi-factor authentication enabled, a really good password policy backed by some sort of password validation tool, and an excellent security awareness program. Sadly, as I write this, many organizations are still not there. The “we’re too small to get attacked” mentality is a false one; it rests on the myth of security through obscurity. In fact, the internet’s anonymity plays against most people who think this way. IP addresses may be attributable and domain names may carry names, but all attackers are looking for is weakness. They’re not targeting organizations; they’re targeting the vulnerability.

So when your email platform doesn’t announce risky sign-ins immediately, what can happen is that within a few minutes of that risky sign-in, the attacker has guessed (or been given by the unsuspecting phishing target) a password that grants access to all of that user’s email. Those messages can contain corporate information, individual account information, or any number of things useful to the attacker, none of which bode well for the defenders.

The sign-in notifications can be tracked in other tools (like a SIEM) using application programming interfaces, and that sped up our awareness of who was signing in from where and whether that was expected behavior. Email filtering was also passed off to another tool outside the SIEM, and now email is more secure and it’s easier to find and fix problems. Time to resolution was actually our biggest complaint about the native tool; our new email security platform took less than an hour to remediate problems that previously could take days to find, let alone hours to remediate once found. And make sure your security awareness program keeps people slightly on edge about email security. Then they’ll act like prairie dogs and bark whenever something really dangerous enters the environment. Otherwise, malware can quietly enter, remain undetected, and attack at a time and place of its choosing.
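As a rough illustration of the kind of rule a SIEM can run once sign-in events are pulled through an API, here is a toy check for unexpected geography and repeated failures. The event fields and thresholds are hypothetical simplifications, not any vendor's schema:

```python
# Toy SIEM-style rule over sign-in events: flag unexpected geography
# or repeated failed attempts (possible credential stuffing).
# Field names and thresholds are hypothetical simplifications.
EXPECTED_COUNTRIES = {"US"}
FAILURE_THRESHOLD = 5

def risky_sign_ins(events):
    return [e for e in events
            if e["country"] not in EXPECTED_COUNTRIES
            or e["failed_attempts"] >= FAILURE_THRESHOLD]

events = [
    {"user": "alice", "country": "US", "failed_attempts": 0},
    {"user": "bob",   "country": "RU", "failed_attempts": 0},   # unexpected geography
    {"user": "carol", "country": "US", "failed_attempts": 12},  # likely stuffing
]
for e in risky_sign_ins(events):
    print(f"ALERT: {e['user']} from {e['country']} ({e['failed_attempts']} failures)")
```

The point is not the rule itself but the latency: a check like this can run the moment events arrive, instead of waiting days for a vendor report.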

Haiku #22 Selective excision


Persistent presence.

Position, pressure, poise, posture.

Expunge prudently.

Sometimes the adversary’s presence in your environment is too intricately intertwined to be resolved with simple, binary choices. Removal actions might trigger a negative response.

Somewhere between the false alarms from the myriad detection tools and the daily operations of a business that isn’t focused solely on cybersecurity, there appears real behavior that may or may not be an indicator of compromise. One emerging trend is tools that collect metadata (data available without the use of agents) to tell defenders more about their Internet of Things environment. If we rely solely on agents, that information is practically unavailable, unless it’s some sort of NIDS agent, and those produce exactly the perception of threat that I want to address.

Consider this. Command and control signaling (the kind malware uses to get its instruction set from its malicious command center) bears a striking resemblance to the traffic that allows Canon to track its printers in the business environment and keep them well maintained through just-in-time logistics applications. Or to a Philips CT scanner operating at peak capacity, anticipating potential operational lag before it happens and dispatching a technician to evaluate, preventing that downtime.

The current answer, for many tools (especially in healthcare), is to tell the organization to “whitelist” the behavior of that device. On the surface (and in most cases I’d agree), this is probably the decision that best supports business operations. But there’s a fundamental disconnect here that makes me uncomfortable, and I think IoT vendors (especially those who get devices approved through the FDA) should consider it during product development.

By whitelisting a behavior, organizations accept third-party risk. In the case of the CT scanner, the IoT metadata collection tool vendors tell me that Philips now has a direct connection to that device in my network. A quick Google search will reveal that in the not-too-distant past, Philips suffered a cyber attack. Why does Philips not have to demonstrate, repeatedly and often, that such an attack against their systems will not impact me? The best business associate agreement in the world wouldn’t really cover this situation. I might feel better if Philips (or any FDA-approved device manufacturer) were required to produce a SOC 2 report for each device class demonstrating how the device was designed with secure interoperability in mind.

The model I have in mind is similar to the military’s Joint Interoperability Testing Center (they’ve probably changed names so please accept my apologies for dated terminology). This organization steps into the procurement process and requires that all equipment pass certain rigorous interoperability tests prior to being fielded on military communications networks. They evaluate so many things that, rather than list them, I’ll just say that it typically adds time to the fielding cycle that some people might feel is overkill. But it ensures that the product works and won’t inadvertently compromise the security required by military leaders.

In practice, this obligation (and risk) is placed on the purchasing organization. Each individual covered entity buying a piece of equipment that has an operating system incapable of accepting a device-level agent that would allow monitoring has to figure out how to protect against that device being co-opted into a botnet later or being used as a pivot point should the vendor get attacked. And if we whitelisted it, as suggested by the vendor, then we opened up a pathway for that pivot point to even exist.

If it were up to me, we’d reduce the risk by eliminating the whitelist. But that introduces another risk: potential downtime should some background process fail and go unnoticed by the “phone home” command and control signal the vendor wanted us to whitelist. It’s a fine line to walk, and that’s what makes most cyber risk registers longer than anyone outside the industry can imagine.

Haiku #33 Tactics


Spears, whales, and catfish

Tactics, techniques, procedures

Disciplined process.

Both sides have tools and methodology.

What should be obvious, if not painfully obvious, even to the casual observer at this point is that cyber criminals are just as organized as both their physical-world counterparts and their adversaries. While there are obviously newcomers to the business who lack the experience of the more seasoned organizations, the products and services they employ are created by experts in the field: people who know how to initiate an attack, make it persist, and cause damage at a time and place of their choosing…should defenders not be properly prepared. They’re fully prepared to expose an adversary’s weakness and exploit it for maximum gain through economy-of-force maneuvers. And we’re not talking about nation states or advanced persistent threats. These are just normal, everyday organizers of cyber crime.

While many in the cyber security industry have been attentive to the reporting of Brian Krebs for some time, his work on reporting breaking news in cybercrime should be daily reading for anyone wanting to get a feel for the consistency of the tide of criminal behavior.

Understanding the ebb and flow of criminal activity is, in itself, a tactic. Some call it traffic analysis. The type of traffic tells defenders what kinds of defenses are needed, whereas analysis of its volume tells them how robust those defenses must be to withstand the rate of sustained fire the existing defenses are exposed to.
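A toy sketch of that distinction: counting blocked events by attack type suggests which defenses are needed, while the peak hourly rate suggests how robust they must be. The event data below is fabricated for illustration:

```python
from collections import Counter

# Fabricated day of blocked events as (hour, attack_type) pairs:
# a steady trickle of phishing plus an overnight credential stuffing burst.
events = [(h, "phishing") for h in range(24) for _ in range(3)]
events += [(h, "credential_stuffing") for h in (1, 2, 3) for _ in range(40)]

by_type = Counter(attack for _, attack in events)   # which defenses are needed
by_hour = Counter(hour for hour, _ in events)       # how robust they must be
peak_hour, peak_count = by_hour.most_common(1)[0]

print(by_type)
print(f"Peak sustained rate: {peak_count} events in hour {peak_hour}")
```

Here the type counts point to email defenses and credential protections, while the overnight burst shows the rate those protections must absorb.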

Here’s an example using a useful and underutilized tool: multi-factor authentication for business email. Enabling MFA limits the damage done by credential theft, but it’s only one step. Another critical step is disabling legacy authentication, which prevents attackers from using other back doors into the account, like app passwords. Until the sustained rate of fire from credential stuffing attacks is known and understood, an organization might think MFA is enough. But disabling legacy authentication requires a multi-layered approach with second-order costs. For instance, larger organizations may have many older mobile devices that cannot get the OS updates necessary for modern authentication, and buying replacements could be an expensive proposition, depending on the size of the company.
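As a sketch of the account-level sweep this implies, the snippet below walks a hypothetical export of account settings and flags the two gaps (no MFA, legacy authentication still enabled). The field names are illustrative, not any provider's actual schema:

```python
# Sweep a hypothetical export of account settings for the two gaps
# discussed above: MFA not enforced, and legacy authentication enabled.
accounts = [
    {"user": "alice", "mfa": True,  "legacy_auth": False},
    {"user": "bob",   "mfa": True,  "legacy_auth": True},   # MFA on, but bypassable
    {"user": "carol", "mfa": False, "legacy_auth": True},
]

def gaps(account):
    found = []
    if not account["mfa"]:
        found.append("no MFA")
    if account["legacy_auth"]:
        found.append("legacy auth enabled")
    return found

for a in accounts:
    g = gaps(a)
    if g:
        print(f"{a['user']}: {', '.join(g)}")
```

The "bob" case is the one this section warns about: MFA looks enabled, yet legacy authentication leaves a back door open.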

There’s definitely a balancing act to perform every day to make sure that our communities (our protected workforces) remain as safe as we can make them. In the words of Sun Tzu (and the Marine Corps leadership principles), knowing the enemy has multiple aspects that must be constantly considered to remain effective, and assuming that the enemy is somehow less organized or less disciplined is frequently a fatal flaw.

Haiku #26 Vendor fatigue


Vendors! We are weary.

Cold calls. Power points. Time thieves.

Want time? Send e-mail.

Cold call targets ✔️. Midday, unprompted calls steal mission time though. So be prepared with a response.

For those who have been following, I may have appeared to “go silent” recently. This was nothing more than an unplanned reset. I needed time to focus on work and health. I also needed time to recharge the mental juices that flow to write. I’m still just as passionate about the cyber security field. I just needed a break to restore the energy it takes to get up an hour before my already early-rising schedule and articulate cogent thoughts.

I recently saw a post from a fellow cyber executive who said he doesn’t have a desk phone. I thought, “that’s brilliant. I’m going to do that.” Here’s why. On average, I would get two to three calls a day from unsolicited vendors. Cold callers. Many of the products are great, some of them less so. Either way, each of those calls took five to ten minutes of my time trying to convince them that I wasn’t interested. They each had a script they felt obligated to keep pressing into my day. From my perspective, they were distracting me from my primary mission of supporting the active defense of my environment.

One of my first jobs out of college was in advertising sales and I did a lot of cold calling before search engines, social media or widespread use of email. Back then, there weren’t many tools to research an intended audience, get their attention and build a relationship. The goal was to research the target audience’s industry, curate a flow of information that might interest them and engage them by showing how much work you’d be willing to do for them if they bought from you.

About 10% of the vendors who approach me now get this: they build a relationship with me and help me solve the problems I actually have, rather than assume I have the problem their product or service claims to solve better than any other shiny object on the market. The other 90% spend ten minutes trying to get me to give someone else 15 minutes, only to still need an hour a week later for the actual sales presentation. Look at the math: two ten-minute calls per day is 20 minutes; add a 15-minute follow-up to each and we’re near an hour, then two hours for each presentation. The first hour may seem paltry, but if I can’t see the use case, it’s just stealing time from a schedule that already has a lot in it on any given day.

I see a lot of posts about “discovery calls.” Most of what a vendor wants to learn in a discovery call is really an attempt to coax confidential information out of me without the benefit of even a non-disclosure agreement to protect against the potential abuse of private information.

I have a budget. I use it to solve problems, and I buy new products. But I don’t buy every new product, and if a salesperson can only demonstrate value by first expecting me to spend my time explaining confidential details of how our business defends itself from cyber threats, then my impression is that the sales model is broken. I don’t need a sales script, written or spoken. I read all the same headlines about various compromises, so leading with those, as if to generate a sense of fear that a product is sure to solve, isn’t a good start either. I would prefer a brief description of the problem your product solves, a history of how that product has performed for others, and a description of your vision for how it will perform in the future.

Haiku #31 Target selection

Spear phishing attack.

End of day, targets mobiles.

Don’t let your guard down.

Good attacks:

-Specific timing

-Clear objectives

-Economy of force

“Be especially watchful at night and during the time for challenging”. This is one of the Marine Corps’ eleven General Orders. While many phishing attacks are launched at any time throughout the day, the most devastating impacts of those attacks are felt when the victim succumbs to the attacker’s call for action after hours, typically on a mobile device. Why do we continue to fall prey to these attack vectors and how can we condition our communities to better screen incoming email?

Our world has become so connected, and our tolerance for letting correspondence that requires action sit unanswered so low, that emails designed to trick us into clicking a link "from the service desk" remain extremely effective. Our communities remain susceptible to this type of attack because loss of account access (along with whatever data those credentials protect) may keep us from conducting our routine business. A first step toward ameliorating the situation is to improve and reinforce communications from the Service Desk. Across organizations, many people still fail to call the Service Desk by its proper name (whatever that may be). Perhaps the Service Desk could publish a monthly tech-tips newsletter to increase visibility and help people in our organizations better understand roles and responsibilities. Establishing a clear understanding of how a password reset (guided or self-service) is actually accomplished may also limit the impact of this particular attack vector. Much of this increased correspondence may go unread (we're obviously past the early days of the Internet, when every email was read), but even a small bump in awareness may reduce the organization's susceptibility to this attack type.

One of the biggest common factors in the success of these attacks is that the email is likely viewed (and acted on) from a mobile device. Attackers know this. The people who packaged this attack type for sale to others understand our basic frailty on this point and figured out how to capitalize on it repeatedly over time. I'm attached to my phone. It goes with me almost everywhere, except swimming, bathing, or water skiing. It's practically my one uniting device…I can work from it, analyze my workouts, or keep in touch with family and friends. But I am very cautious about just how much email I respond to or act on from it. I rarely click links unless I'm certain of the source. Even then, I'd be more inclined to search for the site myself than to click the link provided.

My habits have developed over time as a mechanism for implementing that General Order above. I am especially watchful at any point where my defenses are weakened by the absence of legitimate controls to protect me. We have multiple layers of email security in place…I know because I helped implement them…but some emails still get through. Attackers continue to evolve the vectors that bypass our security measures. Therefore, I practice extreme discipline when it comes to reacting to email, because I've seen firsthand the damage that can be caused by someone clicking a link for a password reset that was actually a "dumb bomb"…just a plain old phish that sent the victim to a run-of-the-mill credential-harvesting site. Our security awareness program is largely delivered by a national vendor, but our programming is specifically cultivated from within their offering to address the frailty of our mobile hygiene, so that we can help our community better understand the value of stopping and thinking before clicking any link on a mobile device.

Haiku #30 Cloud security

Simply, trusted cloud

Why do some people assume

Your security?

Just like the "on prem" world, there is no single solution. Find people who know what they're doing when securing your cloud apps.

There are so many funny aphorisms about computing in the cloud that it's hard to find just one that sums it all up best. There's the comic with the clouds, the father and son, and the reference to Linux boxes. There's the line that goes something like "There is no 'cloud computing'; it's just somebody else's computer." Whatever one's current experience with cloud computing, some basic themes repeat themselves throughout the technological life cycle. Cloud infrastructure did not somehow magically escape these themes, but in the race to adopt the newest thing, some may have overlooked the details.

Confidentiality: putting something in a cloud environment only makes it more accessible. Having a set of credentials to reach that cloud environment does not, by itself, preserve the confidentiality of the data. The same controls that were required in on-prem solutions are still required in the cloud. How can cloud environments be built to preserve confidentiality? First, build a strong identity and access management (IAM) program for whatever is built in the cloud. You could federate it and tie it to an existing Active Directory server, or you could build something completely new. Either way, the desired end state is controlling who can and cannot access the information, limiting unauthorized disclosure. Do we still want multi-factor authentication? Since that's the current standard for bolstering identity, it seems like a great additional layer of protection. Once we start digging into all the details of a solid IAM program, it may well become apparent that the federated approach will simplify life for the end user: fewer sets of credentials to manage. Of course, there's always a trade-off with federated credentialing and single sign-on, because now the broker is susceptible to attack along with the rest of the system. But unless the cloud-based system is designed by security people for use by security people, it will enjoy broader support from the population it serves if its use does not become overly cumbersome. We don't want to forget about encryption, either. Data in the cloud can still be at rest, in transit, or in use. The target should be securing all three states; at a minimum, secure data at rest and in transit.
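The multi-factor authentication mentioned above is most often delivered as a time-based one-time password. As a vendor-neutral illustration (not a description of any particular product), the TOTP algorithm from RFC 6238 can be sketched in a few lines of Python using only the standard library; the function name `totp` is mine:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6, now=None):
    """Derive a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of whole intervals since the epoch.
    counter = int((now if now is not None else time.time()) // interval)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset taken
    # from the low nibble of the last digest byte.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

The point for a confidentiality discussion is that the second factor is derived from a shared secret plus the clock, so a stolen password alone is not enough; the broker still has to protect that secret.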

Integrity: preventing unauthorized data manipulation in the cloud clearly requires more attention to detail than when the data warehouses sit on site at a corporate facility. In the cloud, the data warehouses aren't protected by internal physical security (however strong that layer might be). Instead, our focus must be on strengthening the technical controls that limit unauthorized access. For instance, while on prem we might have been satisfied with a simple checksum, in the cloud we might decide we need keyed, salted hashes to bolster the protection. Of course, this will alter the required storage capacity to some extent and increase the complexity of data access and use, which in turn increases the complexity of the application interfaces required to use the data. However, the alternatives could be less palatable. No customer wants to find out that data they trusted has been catastrophically altered.
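The checksum-versus-keyed-hash distinction above is worth making concrete. A plain checksum can be recomputed by anyone who can alter the record, so it only catches accidents; a keyed tag (HMAC) also catches deliberate tampering, provided the key lives outside the cloud store. A minimal sketch in Python's standard library, with illustrative function names of my own choosing:

```python
import hashlib
import hmac
import secrets

def tag_record(key, payload):
    """Compute a keyed integrity tag (HMAC-SHA256) for a stored record."""
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_record(key, payload, tag):
    """Constant-time comparison guards against timing attacks on the tag."""
    return hmac.compare_digest(tag_record(key, payload), tag)

# Demo: the key is generated and held by us, not by the cloud provider.
key = secrets.token_bytes(32)
record = b'{"customer": 1138, "balance": "412.07"}'
tag = tag_record(key, record)
assert verify_record(key, record, tag)                              # untouched
assert not verify_record(key, record.replace(b"412", b"999"), tag)  # tampered
```

An attacker who alters the record in the cloud cannot forge a matching tag without the key, which is exactly the property a bare checksum lacks.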

Probably the biggest advantage of cloud computing is the ability to transfer at least some of the availability risk to the cloud vendor. Applications can still break, but the hardware is less likely to, and the service-level agreements generally include availability commitments backed by replication and content delivery nodes. The chief availability challenge for cloud computing, then, may be determining an update schedule that minimizes downtime. How organizations do this will be driven by many factors: keeping the interface on the cutting edge must be balanced against ensuring that all new updates are fully vetted before applying them. Having a mock and a production cloud environment that are truly identical may seem like an extravagant expense at deployment time, but a project that, four years down the line, is integrating legally required new capability and can't be fully tested except in production (thus risking compromise to live data) is not good software development life cycle management.

Hopefully the rash of cloud compromises in the early days of broader adoption will teach valuable lessons, but hope is not a business plan or a means of maintaining business continuity. Using cloud resources requires the same planning, installation, operations, and management expertise as the on-prem world demanded. The difference is just who owns and maintains the hardware.

Haiku #25 Betrayal

Breach letter names me.

My skin crawls; my heart races.

Is there no relief?

Each person deals with the stress of breaches differently, but there’s a certain amount of trauma. Embracing that reality provides a surreal clarity of mission for those who provide security.

The first couple of times our accounts are breached may pass by our internal sensors without much notice. Who has actually heard of Exactis or its breach? I only know of it and what the company does because I work in cybersecurity and check up on my own accounts and others' as part of our ongoing security awareness program. As a courtesy, that program extended its reach beyond education this year to include helping our co-workers (who are also members of our community) understand the impact of breaches on their personal lives: which accounts have been breached, how those organizations got their information, and how to take action relevant to them and their families to mitigate the damage of those attacks.

But the first real breach where I was aware of my own name and private information being released was the VA breach in 2002. When it occurred, the rumors started flying across all military units. Perhaps because I was still in the Marine Corps and in a technology leadership role, I had access to more information than others. Regardless, the sensation of "what do I do?" was palpable among all my co-workers. The obvious steps (credit alerts, password changes, checking accounts for suspicious activity) were less clearly known at that point. But we took them because they seemed prudent. And since it was the first one, we really didn't know what the long play was. Certainly, as cybersecurity professionals, we had our informed speculations on how the information could be used to harm us as individuals, but the industry of credential theft, personal information theft, and personal health information theft was in its infancy.

Flash forward to 2018, and I've received at least two more breach letters from Marine Corps units where I've served. What I mean is that I actually know people in leadership positions at these organizations, or did at the time of the breach. My sense of outrage at those events was much greater. I wrote letters and essays on the subject that I never sent or published. I knew in my heart that sending them would not likely result in anything. The Marines who made the errors had probably already received whatever punishment the Corps would mete out, and my information was no safer. I retreated to the safety actions I knew best. But in the moment I first opened the letter, there was outrage. I wanted justice. I wanted to see obvious action taken. I did not want to read or hear platitudes about how "we take the security of your information very seriously." How could they, I thought, if they let this happen?

When breaches happen and our friends and co-workers receive those letters, we must be prepared for the initial backlash. I suspect my responses to the breaches in 2017 and 2018 were not uncommon. My follow-on actions (keeping my silence and allowing the Marine Corps to dole out its own form of justice) probably sit somewhere in the middle of the spectrum. At one extreme, there are probably many who feel that a class action lawsuit is the only thing that will get the attention of the organization whose data was stolen. And that should be expected. Yet there are so many data breaches today that proving a specific breach had specific impacts would be difficult, unless the impacted person has absolutely no other online presence.

Depending on the type of organization breached, the sense of betrayal may exceed expectations. In my case, it was my employer and a large government organization. My sense at the time was "they should know better." That's what prompted my initial reactions of letter and essay writing. I thought a little public embarrassment would certainly be appropriate. In the end, it might not have solved any problems, but it would have given me an outlet for that sense of betrayal. As breaches hit ever closer to home, we must help our friends, families, and co-workers deal with the consequences. Give them the tools they need to take appropriate action to mitigate the damage. Do not rely on an impersonal breach letter that offers a few months of credit monitoring. Talk to them at church, at the local VFW, or in line at the next county fair (because the Walmart lines are shrinking with "Pickup"). Wherever you meet them, let them know they can reach out to you as their personal cybersecurity consultant. They will need that interaction from us. It may not assuage the sense of betrayal, but it may restore a sense of confidence that people they know are working hard to prevent future attacks.