PODCAST

Taking a proactive approach to cybersecurity with Laurie Williams

The SolarWinds hack in December 2020 is considered one of the largest and most sophisticated attacks known to date. The attack, which exposed the data of over 30,000 public and private organizations, was used as a springboard to compromise a raft of US government agencies. According to experts, this hack could be the catalyst for broad changes in the cybersecurity industry, prompting companies and governments to devise new methods of protecting themselves and reacting better to breaches and attacks. In this episode of Cocktails, we talk to a distinguished university professor and expert on cybersecurity and touch on some taxonomies and frameworks that organizations can apply to build up their security. We also discuss how we can take a more proactive stance on cybersecurity and pick up some great practical advice for making our software products more robust and secure.

Transcript

Aaren Quiambao

Welcome to Coding Over Cocktails, a podcast by Toro Cloud. Here we talk about digital transformation, application integration, low-code application development, data management, and business process automation. Catch some expert insights as we sit down with industry leaders who share tips on how enterprises can take on the challenge of digital transformation. Take a seat. Join us for a round. Here are your hosts, Kevin Montalbo and Toro Cloud CEO and founder David Brown.


Kevin Montalbo

Welcome to episode 38 of the Coding Over Cocktails podcast. My name is Kevin Montalbo. Joining us from Sydney, Australia is Toro Cloud CEO and founder David Brown. Hi, David. Good morning.


David Brown

Morning, Kevin. 


Kevin Montalbo

And our guest for today is a distinguished university professor in the Computer Science Department of the College of Engineering at North Carolina State University. She is a co-director of the NCSU Secure Computing Institute and the NCSU Science of Security Lablet. She's also the Chief Cybersecurity Technologist of the Secure America Institute. Our guest for today is Laurie Williams. Hi, Laurie, welcome to Coding Over Cocktails.


Laurie Williams

Hey, thanks for inviting me.


David Brown

It's our pleasure. So we're obviously going to be talking about cybersecurity, one of your core areas of expertise. Can we start off by asking: how can we take the practice of cybersecurity from being a primarily reactive process to a proactive discipline?


Laurie Williams

Yeah, I mean, what I can say from my experience over the last 20 years of working with security is that the way it becomes more proactive is that people notice all the really bad things that are happening. I don't see a lot of people making that transition voluntarily. When a really bad thing happens, then the awareness goes up. Back in, I think it was 2013, there was a big attack during Christmas time on Target, the retail chain, which is huge in the US, and awareness jumped. So, I work with the Building Security In Maturity Model, BSIMM. The BSIMM folks go out to companies once a year and really survey what practices each company uses, through interviews and by examining artifacts. It's not just self-reported; they actually have to demonstrate what they do. We analyze that data, and you see a jump: Target changed the industry. And so as things happen, organizations increasingly think, "I don't want to be in the news." We just had, on the east coast of the United States, a ransomware attack that took out gasoline supplies, so people couldn't drive their cars, and that will raise awareness. So unfortunately, I think the transition from reactive to proactive is going to come with more and more of these big attacks.


David Brown

So it comes from being reactive to attacks on other organizations.


Laurie Williams

Right, exactly. And then there's the desire: we don't want that to happen to us, and so they start to adopt more practices. That's what we see as we analyze this BSIMM data, which is really the largest data set there is of cybersecurity practices used by organizations. It's done out of Synopsys; it started out of Cigital, but then the personnel moved to Synopsys. There are 125 security practices that they assess, and we actually have access to the raw data and are analyzing it. And we see these jumps happen when these big incidents happen.

So people will be more proactive. Another thing that may turn people proactive: there was just a big US executive order on cybersecurity that came out in May, and it prescribes what I would classify as proactive practices. If the US government is saying, "we won't buy from you unless you follow these proactive practices," then companies will do it, at least the ones that supply the US government, and lots of companies supply the US government in some shape or form. I know I'm speaking very US-centric; the BSIMM data is worldwide data. But I think that, to some degree, the standards, the NIST standards, the things that happen in the US relative to cybersecurity, spread through the world.


David Brown

Absolutely. Those NIST standards you refer to, perhaps we can go through some of them. I know you take a very scientific, data-driven approach to security. So how does an organization go about systematically developing prevention, detection, and response patterns for their security requirements?


Laurie Williams

Yeah. So for the security requirements, one angle that we took a number of years ago was to use a natural language processing algorithm to read the functional requirements of a product and match them up with the security controls in the NIST 800-53 standard. It would look for keywords in the functional requirements, match them to patterns, and then suggest security requirements. For example, a requirement might be "a doctor edits the patient record," and from that natural language requirement, which is functionality the product should have, this would say: OK, if the doctor is going to edit a patient record, the doctor must be authenticated and the transaction must be logged. So it tells the development team the associated security requirements based upon that functional requirement.
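The idea Laurie describes can be sketched as a simple keyword-to-control lookup. This is a minimal illustration only: the rule table and control references below are invented for the example, and the actual research used NLP techniques rather than literal keyword matching against the full NIST 800-53 catalog.

```python
import re

# Illustrative keyword -> security-requirement rules. The real NIST 800-53
# catalog is far larger, and the study used NLP rather than regex matching.
CONTROL_RULES = {
    r"\b(edit|update|modify|delete)s?\b": [
        "Actor must be authenticated (cf. NIST 800-53 IA-2)",
        "Transaction must be logged (cf. NIST 800-53 AU-2)",
    ],
    r"\b(view|read|display)s?\b": [
        "Access must be authorized (cf. NIST 800-53 AC-3)",
        "Read access must be logged (cf. NIST 800-53 AU-2)",
    ],
}

def suggest_security_requirements(functional_requirement: str) -> list[str]:
    """Suggest security requirements implied by one functional requirement."""
    suggestions = []
    for pattern, controls in CONTROL_RULES.items():
        if re.search(pattern, functional_requirement, re.IGNORECASE):
            suggestions.extend(controls)
    return suggestions

# "edits" implies the doctor must be authenticated and the edit logged.
print(suggest_security_requirements("A doctor edits the patient record"))
```

The point of the sketch is the mapping itself: functional verbs in requirements imply non-functional security obligations a development team might otherwise miss.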

So that's one way. But I really do think organizations are increasingly providing pretty extensive taxonomies of security controls, which are what they often use as security requirements, and companies can use those to be exhaustive as they try to be proactive across prevent, detect, and respond. Response is, "my gosh, we got attacked, what should we do?" That's not that great. Detection is, "there are vulnerabilities in there; how do we find them and get them out before someone else finds them?" The best case is when practices are preventative.


David Brown

And there are some blueprints to get companies started?


Laurie Williams

Are there blueprints? Well, the NIST Cybersecurity Framework is a good way to get companies started. I guess the way I would call it a blueprint is that it sets out a risk-based model. Not all companies are starting from the same place, and they also have different risks; depending upon the product the company is producing, they would consider themselves higher or lower risk, and the NIST Cybersecurity Framework does a little bit of that risk assessment. It's not that specific, but it is good for getting a company started. I'd say another couple of ways to get started: I mentioned BSIMM, the Building Security In Maturity Model. It's a taxonomy of 125 practices, and the maturity model says there are level one practices, level two practices, and level three practices. I'll tell you a competing way to look at it in a moment, but level one practices are the practices most organizations do, level two is more advanced companies, and level three is the most advanced. So if you're just starting out, looking at the level one practices of the BSIMM is a good start. There is another taxonomy, with similar origins, called SAMM or OpenSAMM, which comes out of the OWASP organization.

And they have levels one, two, and three as well, and SAMM is a maturity model as well, but they take a different tack. What they're saying is more prescriptive: you should do level one, then you should do level two, then you should do level three, and you should develop a procedure to get yourself to level two and then to level three. So you should set your goals to increase your maturity. It's similar, something like 125 practices, and a way to advance through them. So that means companies don't have to start from scratch; they can go and look at the types of things that are in these maturity models and start to say, OK, we should adopt these things. And they're also pretty wide-ranging: developer practices, management practices, training practices, compliance practices, governance practices are all in there. So those are good ways.


David Brown

I think some of our listeners would be familiar with NIST, for example, because NIST publishes vulnerabilities in software frameworks and libraries, which are often used in the build process to identify potential security defects and create alerts for those proactively. But BSIMM, I think a lot of people may not have heard of. So it's an organization, a collaborative organization, which people can join as a membership? Can you tell us a bit more about the organization?


Laurie Williams

Yeah. So that's where the two, BSIMM and OpenSAMM, had similar origins and then split. BSIMM is not a membership; it's not what you just described. It's a framework where consultants from the organization can come to your business, help assess you, and develop a plan. So that's one. OpenSAMM comes from OWASP, the Open Web Application Security Project, which is a nonprofit, and that's more of what you're talking about. There are some other NIST and ISO standards, but I think the NIST 800-53 security controls are also a good starting point, a nice comprehensive list. So I think any of those. If an organization wants to pay to be assessed against the BSIMM, that's great; if they just want to look at what has been published about it, what the 125 practices are, that's fine.

One thing is, OWASP has more like spreadsheets you can download that help you develop a plan, and then there are the NIST 800-53 security controls. All of those provide a good framework for people to get started. Another thing that I really like, again by OWASP, is the ASVS, the Application Security Verification Standard, and that's more technical. It enumerates, 136 if I remember right, different things you should test for in your product. So it's much more developer-centric, not governance, not anything else. When everyone's trying to figure out what are all the things they should do, starting from scratch and developing your own standard is not a recommended practice; go to some of these NIST and OWASP resources that are available.


David Brown

I was looking at a paper you co-authored, establishing a baseline for measuring advancement in the science of security. I was interested in this concept of establishing a baseline, and in it you mention that we need to establish some scientifically founded design principles for building in security mechanisms from the beginning. What do these principles look like?


Laurie Williams

Yeah. So there are principles that have been around. There's a famous paper written by Saltzer and Schroeder, back in 1975, and it's full of design principles; I think all of them are still valid. I won't go through all of them, but some are: least privilege, which says every person should have the least amount of privilege possible, so design that in; minimizing trust, so don't trust anyone, only give them the things they absolutely need; defense in depth, so assume the attackers are going to get through your first line of defense and build multiple lines of defense. Complete mediation is one where you need to keep checking access: continuously check that the person is who they say they are, and don't just assume that because they logged in they are the person they say they are. Keep checking. So there are a good 12 or 13 design principles that have been around for quite some time.
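Two of the principles mentioned, least privilege and complete mediation, can be shown in a few lines of code. This is a toy sketch under assumed names (the permission table and functions are invented for illustration); the key detail is that authorization is consulted on every access rather than cached at login, so a revocation takes effect immediately.

```python
# Toy policy store: each user holds only the privileges they need (least privilege).
PERMISSIONS = {"alice": {"read"}, "bob": {"read", "write"}}

def check_access(user: str, action: str) -> bool:
    # Complete mediation: consult the live policy on EVERY request,
    # never a copy cached at login time.
    return action in PERMISSIONS.get(user, set())

def read_record(user: str, record: dict) -> dict:
    if not check_access(user, "read"):
        raise PermissionError(f"{user} may not read")
    return record

def write_record(user: str, record: dict, field: str, value) -> None:
    if not check_access(user, "write"):
        raise PermissionError(f"{user} may not write")
    record[field] = value

record = {"patient": "X", "notes": ""}
write_record("bob", record, "notes", "stable")   # allowed
PERMISSIONS["bob"].discard("write")              # privilege revoked mid-session
try:
    write_record("bob", record, "notes", "...")  # denied immediately, no stale cache
except PermissionError as e:
    print(e)
```

With a cached session-scoped permission set, the second write would have succeeded; re-checking on each call is what the complete mediation principle demands.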

And so actually developing mechanisms and frameworks to support people using those types of principles is important. There's not really a need to develop new principles; it's really to adhere to the ones we know about. As for that paper you referenced, establishing a baseline: I've worked with the National Security Agency, the NSA, for more than 10 years on a Science of Security project, and the basis of that project is that the NSA would like researchers to be more principle-based. A lot of research these days is very reactionary. Across prevent, detect, respond, a lot of the research falls into response: the attackers just did this, so now supply chain is the thing, because we had SolarWinds and some other big supply chain attacks. But that's reactionary, that's response-based research, and the NSA would really like us to be more prevention-based. They see the research community as not being as principle-based. So that paper is about how we, as a scientific field of security, report results so that other people can build upon our science. That's why we're establishing a baseline.


David Brown

Speaking of which, you talked about supply chain being one of the hot topics of the moment. In the news recently, President Biden met with Putin, mentioned cybersecurity and cyberattacks, and mentioned some lines in the sand. As I understand it, the lines in the sand weren't actually published, that is, the areas or entities against which a cyberattack would be considered crossing a line. But if you were to have a guess, what sort of areas would you be imagining? Infrastructure and military are obvious candidates, but would there be some non-obvious candidates perhaps?


Laurie Williams

Yeah, certainly military and government. There are definite accusations, and maybe even proof, that the Russians tampered with the election and got in and exploited Hillary Clinton's email, for example. That's government, and interrupting the political process. Then you mentioned non-government; well, there's SolarWinds, which is somewhat government. In SolarWinds, attackers planted a Trojan, a hack that was able to get into the supply chain, and that opened up the doors to the Pentagon and some other government organizations, as well as some big companies like Microsoft. So that is a case where, likely, Russians opened up the doors to cause damage at both the government and the industry level.

And even, and I'm not sure how many of these things get across the whole world, the Colonial Pipeline attack was believed to be people from Russia. It's interesting; I saw the president of Microsoft, Brad Smith, mention in a blog that cybercrime, just making money off of this, was a $19 billion business, and the people who launched that Colonial Pipeline attack, which caused people like me to have trouble getting gas, said, "we really didn't mean to do that, we just wanted to make some money." So that's kind of the next wave, this economy of causing these cyber disruptions to make money. Of course, since no one knows exactly what Biden said, I think it really spans all of that: what a nation state attacks at a defense level and at a government level, attacks that disrupt government processes as well as harming citizens and companies, and they've shown they do all of those.


David Brown

I'd like to ask a more practical, on-the-ground type of question, if you like. Security breaches are often discovered through log files. So the question then becomes: what should we be logging in our applications? And if we really want to be able to find anything, should we just log everything? Or are there implications to that as well?


Laurie Williams

Yeah. So we did do some work with logging, and one of the things we showed with our work was that a lot is not logged; we had papers like "Modifying Without a Trace." So a lot of things aren't logged, and logging in the general case in computer science originated from debugging: logging to debug, not logging for forensics. Forensics by design is a new field; we coined "forensic-ability," the ability of a product to enable forensics. These are all things that people do need to be considering much more. So, should we log everything? No, for sure not everything. Disclosing data through logs is another attack vector, so you have to be careful about what you log, and not log any sensitive identifiers. You can't use the social security number in the US as the unique identifier, because now that's sensitive data in itself. In the general case: CRUD, that's create, read, update, and delete. When someone does those, plus view, which is another thing we determined, you need to log it, because as long as you can say who saw something, you can answer the forensic questions. So some identifier of who did the create, read, update, delete, or view is important, while watching that you don't log so much that you create a new attack vector. And it's actually hard to decide what to log and what data fields you need to log; these are still open research topics. When my students were working with logging, it gave me one of my funnier memories as an advisor. They went through a medical application and the requirements for that medical application, and similar to the other product I described, what we were trying to do was to be able to read the requirements for a system and then, based upon some heuristics, recommend what should be logged.

And so what my students were doing was looking at the transactions, coming up with the heuristics, and then applying them. But between the students there were some disagreements, and I was like, come on, bring them to me, I'll resolve it, I don't know why you guys can't figure this out. And then they brought it to me and I'm like, yeah, I don't know either. So then the three of us were making our best guess. It's really not that straightforward; you can read a requirement and be like, hmm. So there's still work to be done. I do think there is definite potential for natural language processing to aid in the what-should-be-logged process. And then watch that you're not disclosing information in the logs, and don't allow your logs to be altered: they're write-only, they're backed up, things like that.
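The logging heuristic Laurie lays out, record who performed which of create/read/update/delete/view on what, without putting sensitive values into the log itself, can be sketched minimally. Names and the in-memory list here are illustrative; a real audit log would go to append-only, tamper-evident, backed-up storage, as she notes.

```python
import time

# In-memory stand-in for an append-only, write-protected audit store.
AUDIT_LOG = []

def audit(actor_id: str, action: str, resource: str) -> None:
    """Record who did what to which resource; never the resource's contents."""
    assert action in {"create", "read", "update", "delete", "view"}
    AUDIT_LOG.append({
        "ts": time.time(),
        "actor": actor_id,     # an opaque user id, never an SSN or other sensitive key
        "action": action,
        "resource": resource,  # a record identifier, not the record's data fields
    })

def view_patient_record(actor_id: str, record_id: str, db: dict) -> dict:
    # "View" is logged too, so "who saw this record?" is answerable later.
    audit(actor_id, "view", f"patient/{record_id}")
    return db[record_id]

db = {"p42": {"name": "Jane Doe", "notes": "..."}}
view_patient_record("nurse-7", "p42", db)
print(AUDIT_LOG[-1]["actor"])  # who looked is recoverable from the log
```

Note what is deliberately absent from the log entry: the patient's name and notes. Logging the identifiers needed for forensics while excluding the sensitive values is exactly the balance described above.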


David Brown

We are a software company ourselves, and software companies and organizations that have a team of software engineers will often create a culture of focusing on adding new features. Can that culture lead to a danger where, if you're focused on features as opposed to fixing non-critical security issues, it results in what you've referred to in the past as security technical debt?


Laurie Williams

Right. Yes, absolutely, absolutely. I actually did a keynote at a conference called TechDebt. It was one year ago, so things could have changed, but at the time I looked far and wide for all the papers that address security technical debt, and there really weren't any. So it's not an issue that there has been much study on. But certainly your company, and most companies, really do focus on and reward the production of functionality. And the cognitive overload of a typical software engineer, when they're having to build security into a product, run static analysis tools and fuzzing tools, and get notifications that components they use have vulnerabilities, getting all of these signals from all over the place, really does cause cognitive overload. And then it can cause technical debt, because more likely they say, "this is too much, I'm not doing anything, I'm just going to produce my functionality." So that human aspect of security is, again, an area that needs a lot more focus, so that the engineers are getting the strongest signals about what they need to deal with, so that they don't create security technical debt, and so that we reduce the false positives; a lot of tools raise alerts that are false positives.


David Brown

Yeah. And speaking of those false positives: like I mentioned before, we use the NIST database to identify vulnerabilities in libraries during the build process. But you've also written about artificial intelligence potentially being able to assist organizations in deploying more secure software products as well.


Laurie Williams

Right. Right.


David Brown

Do you see that in practice, people more and more using artificial intelligence to build in security?


Laurie Williams

Yeah, in a number of ways. If you put natural language processing in the AI category, I mentioned a couple of the projects we've worked on, like what to log, or what your security requirements are, and there are other people doing things like that. But there are also a lot of learning algorithms. Something that we've done, and a lot of other people have done, is vulnerability prediction models: based upon features, where should you look for your vulnerabilities? What are the signals that say there are vulnerabilities here, you should look here? A lot of people are doing that. Mining logs is another one: logs are terabytes and terabytes and terabytes, and you can log all you want, but if you never look at the logs, you might as well not have them. So identifying anomalous behavior in the logs is definitely an AI application. And even what you talked about, looking at the components that have vulnerabilities in the National Vulnerability Database, which I think is what you're saying your company does, that's the most rudimentary form. That whole field, it's called SCA, software composition analysis, is very complicated.

The National Vulnerability Database is in most cases the beginning part, and so tool vendors in that space are using natural language processing to read things like bug databases and security advisories, to identify vulnerabilities before they get reported into the National Vulnerability Database. So that's an AI kind of thing. Another aspect, which is probably not so AI-related, is that people don't want to be notified that a component has a vulnerability if that vulnerability has nothing to do with the functionality they use from that component. Trying to identify that, which may require dynamic analysis, is another aspect. But AI in software security is an emerging field. I've actually done one keynote in that space, in February I think, and I have two more this year talking about just that topic: the union of cybersecurity, software engineering, and AI, and what people in the world are doing about it. My group does some work in that space, in the union of those three areas, and some researchers in Singapore do as well, and in Luxembourg, so in Europe. But there's a lot more that can be done.
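The basic dependency-checking idea mentioned above, comparing the components a project pins against known-vulnerable versions, reduces to a lookup. This is a bare-bones sketch: the advisory table and CVE identifiers below are invented for illustration, where a real SCA tool would pull live data from the National Vulnerability Database or vendor advisories and also resolve transitive dependencies.

```python
# Hypothetical advisory data keyed by (package, version); a real tool
# would source this from the NVD and security advisories, not a literal dict.
KNOWN_VULNERABLE = {
    ("examplelib", "1.2.0"): ["CVE-XXXX-0001 (hypothetical)"],
    ("otherlib", "0.9.1"): ["CVE-XXXX-0002 (hypothetical)"],
}

def scan_dependencies(deps: dict) -> dict:
    """Return {package: [advisories]} for every vulnerable pinned version."""
    findings = {}
    for name, version in deps.items():
        advisories = KNOWN_VULNERABLE.get((name, version))
        if advisories:
            findings[name] = advisories
    return findings

project = {"examplelib": "1.2.0", "safelib": "3.0.0"}
print(scan_dependencies(project))  # only examplelib 1.2.0 is flagged
```

The refinement Laurie describes, suppressing alerts when the vulnerable code path is never exercised by the consumer, is exactly what this version-level lookup cannot do, which is why it is "the most rudimentary" form of SCA.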


David Brown

Like you say, with the volume of data you're dealing with, whether it be log files or transactions or whatever it may be, you simply do not have any alternative other than to use some sort of machine learning techniques.


Laurie Williams

Yeah.


David Brown

To identify issues. So it's basically, as you say, a necessity. Otherwise those log files are only used to try and identify the cause of an event, after the event.


Laurie Williams

Right, right, right. Yeah. So the classic case is a movie star in a hospital, and their record gets into the news. How did that happen? Who found that out? Look in the log files. In one case you can find out the nurse, or whoever it was, that looked, and that's in the log files, but you only looked because it was in the newspaper. And the worst case is when you try to look and you can't even find out who looked; there's no trace of who looked. And like I said, if there are terabytes and terabytes of log files and no learning algorithm to identify things, then it's a waste. The analogy is credit card companies: if anything, we get more calls just to say, "is this transaction really you?" That's really what we have to get to, being proactive like that. Or, something we've proposed in the past too: if someone looks at something, someone gets notified, a kind of chain of command where, if someone looked at a patient record but has never been on the floor of that patient and never performed any service for that patient, someone should be notified right away.

I have a system that we've developed for the classroom, a medical record system; it's a fictitious system, though it's quite large. You can click on a button and find out anyone who has ever looked at your record. So if you work in a hospital and I look at somebody's record I'm not supposed to, they can find my name out just by pushing a button. Maybe you won't do it; it's kind of a deterrent if we have those types of actions in software, just knowing that there's transparency. It's similar to knowing there's a security camera. I'll give a fictitious example: you have a supply room at your work, and you go in on a Saturday; you could collect your child's school supplies from the supply room. But if there's a security camera, you might not do it. If there's not a security camera and no one's around, sure, I might do it. It's similar with medical records or other applications: if you think you can do it and no one will ever know, or even if it's logged, no one looks at the logs anyway, then it's not a deterrent. But if you know you could be easily found out, then you might not do it in the first place.


David Brown

Laurie Williams, you've published hundreds of papers and the like, but do you publish on social media? And if so, where, and how can our listeners follow what you're reading and writing about?


Laurie Williams

Yeah. Mostly Twitter from a professional standpoint, Twitter. Yeah.


David Brown

Yeah. And your handle is Laurie


Laurie Williams

Williams. That's right. Yeah.


David Brown

Great. Well, thank you so much for your time today, Laurie. It's been a pleasure talking to you about cybersecurity. It's a big topic, and you've written an overwhelming amount of material; it was very interesting researching today's topic. It's a fascinating area, and I think something we don't talk enough about. So thank you for coming and joining us on the program today.


Laurie Williams

All right, my pleasure.


Kevin Montalbo

All right. That's a wrap for this episode of Coding Over Cocktails. To our listeners: what did you think of this episode? Let us know in the comments section of the podcast platform you're listening on. Also, please visit our website at www.torocloud.com for a transcript of this episode as well as our blogs and our products. We're also on social media: Facebook, LinkedIn, YouTube, Twitter, and Instagram. Talk to us there, because we listen; just look for Toro Cloud. On behalf of the team here at Toro Cloud, thank you very much for listening to us today. This has been Kevin Montalbo for Coding Over Cocktails. Cheers.



