Security

All aspects of IT security as it relates to the federal CIO role.

For Matters of National Security, Intelligence Community Turns to Silicon Valley

Emerging technologies are integral to solving federal challenges, and many of the solutions originate from Silicon Valley-type startups. But does the intelligence community rely on these startups to help solve national security issues?

John Kammerer, the technical director of high-performance computing solutions at the National Security Agency, said when it comes to computing, big companies depend on component developers at startups.  

“We really look towards the startups down at that level because in the end, we’re trying to build a system out of what we really need to have, all the components focused on the mission problem,” Kammerer said at last week's NVIDIA GTC event in Washington, D.C.

NSA still works with big players in the artificial intelligence and high-performance computing industry, but innovation really happens at the developer level. Big companies are even snapping up startups for that reason, to acquire a technology capability they’re missing.  

Dawn Meyerriecks, the deputy director for science and technology at the CIA, said her agency also benefits from working with startups.

“We’ve found relationships with . . . startups a very, very useful one in terms of shortening time to market for us, and also getting products early that we can use for our challenges,” she said in a panel with Kammerer.

But what exactly do federal agencies look for in startups, and how do they bridge the gaps of bureaucracy?

The CIA partnered with In-Q-Tel, a technology accelerator and venture capital firm that identifies startups with the potential for high impact on national security. It works with government customers and the venture capital community to connect federal problems with the right solutions.

“When I came into government, a large source of innovation was government,” Meyerriecks said. Now, a significant amount of CIA research and development dollars flows into this innovative community. It’s working really well, and yields a better internal rate of return.

“If we want to ride the curves, which we think is really important for our relevance, then we had to figure out a way to do this,” Meyerriecks said about the partnership with In-Q-Tel. It took some time to get the model right, but based on the CIA’s interactions with the companies it works with and its adoption rates, “from a mission perspective, it has proven its efficacy over and over and over again," she added. 

The CIA doesn’t have many restrictions on the startups it is willing to work with, either. After identifying mission needs, In-Q-Tel surveys the market and brings the best to the table. Aside from obvious red flags, Meyerriecks said they start with the best technology, form a relationship with the company and then work together to solve national intelligence challenges.

“Government is slightly more open minded than it used to be,” she said, and provides meaningful problem sets benefiting emerging markets. “That’s always a good thing for a startup to have . . . a real problem that matters to get going." Because once the CIA is in, it is all in, and could even lead to an enterprisewide license.

In fact, Meyerriecks doesn’t want companies making exclusive solutions specifically for the CIA, either.

“We want the commercial marketplace to produce things that we can ride long term, and we don’t want to have special one-offs,” she said. “That’s kind of the worst possible solution.”

If she finds a security hole, for example, she will work to fix it, but that becomes everybody’s fix, not just the CIA’s.

“We don’t want to have one-off specials for the things that we’re licensing as commercial products,” she said.  

Kammerer agreed, adding he doesn’t want to offroad a company’s roadmap completely. If anything, he said, “we’re just trying to add some features to a certain technology so it gives us the mission benefit we’re going to need.”

For example, Orbital Insight takes massive amounts of geospatial imagery from various providers and runs it through algorithms to find and analyze national trends and statistics. So when a specific customer looks for a unique trend, it’s a small tweak to the company’s large image-processing pipeline.

According to James Crawford, the founder and CEO of the company, algorithms can already detect the number of airplanes at a commercial site. If a government customer would rather see the number of airplanes on a military base, the object detection is the same; it’s just the geography that is different.

Yet, startups and agencies still face adoption challenges. Bureaucracy and clearances get in the way, and federal contracting language can be hard to navigate.

“I think our adoption rates are still too slow,” Meyerriecks said, largely because of the acquisition processes. One way the agency tackles this is with an internal version of an on-premises Amazon Web Services installation it calls Commercial Cloud Services, or C2S.

“The goal for that working with In-Q-Tel was to make it easy for companies to deliver product if it's software-based directly into AWS, thereby making it easy for us to get it into C2S,” Meyerriecks said. It’s a helpful step, and if the agency can then bring products in on an experimental basis, monitor their use and eventually adopt them at an enterprise level, adoption will accelerate further.

Survey Says Millennials Care More About Cyber than Boomers Do   

Technology itself creates an inevitable generational gap, as it’s been ubiquitous for most teenagers and twentysomethings growing up as digital natives. Baby boomers, however, have had to learn how to use and adapt to new technologies just to keep up.

It seems as though this divide applies to our willingness to actively support federal cybersecurity efforts, too. Millennials might not want the government to spend more on cybersecurity, but they are more willing than baby boomers to take steps such as assuming greater responsibility for the security of their own devices.

These findings are according to recent survey results from Accenture Federal Services, which asked residents of D.C., Maryland and Virginia about their thoughts on the state of federal cybersecurity. Though 55 percent agreed the government should be more transparent and proactive about data use and security, the remaining results showed an interesting generational gap.

[Chart: Millennials are more likely than older generations to support government cybersecurity efforts.]

4 Ways Artificial Intelligence Can Fortify Cyber

Cyber news in recent weeks highlighted the accelerating cycle of attack and counterattack that goes on in the alleyways of the web. It also offers clues to how artificial intelligence systems can help keep cyberthreats from doing extensive damage.

This month, a botnet was reported to be quietly but quickly spreading among internet of things devices around the world, with the potential to “take down the internet,” at least temporarily, by using millions of compromised devices to launch distributed denial-of-service attacks. Dubbed IoTroop, as well as the more Halloween-themed Reaper, the botnet had conscripted at least 2 million devices in more than a million organizations as of Oct. 19, according to Check Point Research, which first identified it.

For the moment, IoTroop is lying low, in what Check Point described as the calm before the “next cyber hurricane.” Another security firm, NewSky Security, reported hackers had been developing attack scripts for the botnet. Reaper appears to be a more sophisticated attack tool from the same group that last year used a botnet worm called Mirai to launch a DDoS attack that left some major websites — including Twitter, Netflix and The New York Times — offline for several hours.

For government and other organizations looking to protect their networks, data and uptime, this development, sadly, is business as usual.

Threats are always creeping around the internet below the radar of most people, avoiding attention until it’s too late. A day after Check Point’s disclosure about IoTroop, for instance, the Homeland Security Department’s US-CERT and the FBI warned about an advanced persistent threat “targeting government entities and organizations in the energy, nuclear, water, aviation, and critical manufacturing sectors.” US-CERT said the intrusion campaign was used to gain access to the networks of major players in the energy sector. It was a pretty routine announcement.

Attacks such as IoTroop can often be mitigated once identified. The vendors of targeted devices have been issuing patches to fix the vulnerabilities in their devices. But the attackers behind IoTroop also are updating their exploits, potentially leading to an ongoing loop of software fixes.

The biggest problem is that many devices, particularly those connected to the IoT, are largely left unprotected in the first place. Mirai spread by exploiting manufacturers’ default user names and passwords to gain access, while Reaper targets unpatched vulnerabilities.

What are some of the ways AI can help in this accelerated cycle?

Automation

Talking about AI in cybersecurity for the moment mostly means talking about machine learning, a subset of AI in which machines can learn from example and reach conclusions or take actions they have not been specifically programmed for. Automation increases the speed and breadth of a system, enabling it to handle the cloud-based big data techniques being employed to counter cyber crime. The learning aspect helps systems recognize the variety of methods employed by phishing or other malware attacks. It also can help in isolating malware that has entered a network or in identifying the changes in normal network behavior when malware executes.
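
To make the machine learning piece concrete, here is a minimal sketch of one common unsupervised technique, an isolation forest, learning a baseline from summaries of normal network flows and flagging departures from that baseline. The features, numbers and threshold are illustrative assumptions, not a description of any particular product.

```python
# Minimal sketch: learn "normal" per-host traffic summaries, then flag outliers.
# Feature names and values are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Baseline: [bytes_out, packet_count, distinct_destinations] per host-hour.
normal = rng.normal(loc=[50_000, 400, 8], scale=[5_000, 40, 2], size=(1_000, 3))
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# New observations; the second one resembles large-scale exfiltration.
new = np.array([[52_000, 410, 9], [900_000, 5_000, 120]])
print(model.predict(new))  # 1 = consistent with baseline, -1 = flag for an analyst
```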

Threat Detection and Response

Speed is of the essence in fending off an attack or mitigating its damage — and the closer to doing it in real time, the better. AI and machine learning can help by automating and refining the process. Vectra Networks surveyed 459 IT pros at last summer’s Black Hat conference about their organization’s security operations centers. The upshot from Vectra’s results: When comparing detection and remediation times among three groups — teams of 10 or more analysts without AI, AI-only systems and teams of 10 or more analysts working with AI — Vectra found AI-only systems responded more quickly than human-only teams, but the best detection and remediation times came from teams of analysts working with AI.

IoT Security

Devices on the internet of things — such as medical monitors, weather gauges and cameras targeted by Mirai and Reaper — are typically low-power devices without much processing capacity, which can make them difficult to secure through conventional security techniques. But AI and machine learning algorithms are being developed to monitor IoT devices to detect signs of unusual behavior.
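
As a rough illustration of what monitoring a constrained device for unusual behavior can look like, the sketch below keeps a running mean and variance of a single telemetry value and flags readings far outside the learned baseline. It is a deliberately simple statistical stand-in for the machine learning approaches described above; the metric, warmup period and threshold are assumptions.

```python
# Minimal sketch: flag telemetry readings far outside a device's own baseline.
class RunningBaseline:
    """Welford's online mean/variance; cheap enough for constrained devices."""
    def __init__(self, threshold_sigma=4.0, warmup=30):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
        self.threshold, self.warmup = threshold_sigma, warmup

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        std = (self.m2 / self.n) ** 0.5 if self.n > 1 else 0.0
        return self.n > self.warmup and std > 0 and abs(x - self.mean) > self.threshold * std

baseline = RunningBaseline()
# Outbound messages per minute from a camera; the final burst resembles botnet traffic.
for rate in [12, 11, 13, 12, 14] * 10 + [300]:
    if baseline.update(rate):
        print("unusual device behavior:", rate)
```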

Keeping Pace

Information technology, being readily available by nature, cuts both ways. The history of the internet demonstrates that — the first computer worm turned up in 1988, when the internet was still the research-oriented Arpanet and consisted of 88,000 computers. Quantum computing is still largely on the drawing board, but the National Institute of Standards and Technology is already anticipating quantum computing attacks on current encryption standards. The same holds for artificial intelligence: Cybersecurity experts say hackers already use basic AI techniques and expect them to leverage more advanced AI to customize attacks or make them more adroit at avoiding detection. Future network security may depend on fighting AI with AI.

Despite Slashed Cyber Budget, HHS Fights Off Billions of Attacks Weekly

The federal agency responsible for housing and protecting hundreds of millions of Americans' most sensitive health data spends less on cyber defenses than recommended but manages to successfully ward off a bombardment of cyberattacks every week, according to one of its top officials. 

The Health and Human Services Department has all the health data of those using Medicare, Medicaid or health insurance through the Affordable Care Act, and at least one-third of Americans’ personally identifiable information, according to HHS Chief Information Officer Beth Killoran. This treasure trove of data makes HHS a primary target.

“If that doesn’t say that we need to make cybersecurity our No. 1 priority, I don’t know what it is,” Killoran said at GovernmentCIO Magazine’s CXO Tech Forum on Oct. 19. And because attackers today go after people's health history, rather than credit card and Social Security numbers, data protection is more important than ever.

Consider the Food and Drug Administration, an HHS component that fends off more than half a billion breach attempts a week, and that’s just one operating division.

“Imagine how many we have to fend off on a given week,” Killoran said. “And so if you look at that and you look at how much we’re spending on cyber, it’s just monumental what our staff is able to do.”

On average, departments are recommended to spend about 6 to 8 percent of their total IT budgets on cyber, but HHS is spending about 3 or 4 percent and continues to fight off sophisticated breach attempts without compromising data, Killoran said.

That’s in addition to making sure the department is doing what it has to do around Continuous Diagnostics and Mitigation and its EINSTEIN system, which detects and blocks cyberattacks from compromising federal agencies.

Though Killoran understands what HHS has to do from a federal perspective, it’s more than just thinking about — or reacting to — cybersecurity in terms of a mandate or audit. It’s about the need to adopt a strong risk management model and fully understand threats and risks, to be proactive rather than just reactive.

Killoran said it starts by identifying high-value assets, modernizing them and building in protection capabilities. Borrowing from industry’s concept of quality control or quality assurance, she treats the ability to hack into a system as a vulnerability, and that vulnerability as a quality assurance problem; the work is making sure those problems don’t exist in the first place.

GCIO Focus: Winning the Cyberwar with Zero Trust

The sustained high frequency of successful cyberattacks against corporations and government agencies has made one thing clear: Perimeter-centric security strategies are no longer effective.

With insider attacks, data and IT infrastructure residing in multiple locations, and data traveling across the internet, relying on one layer of security at the perimeter is no longer an option. Zero Trust is an alternative security model that addresses the shortcomings of failing perimeter-centric strategies by removing the assumption of trust from the equation.   

With Zero Trust, the focus shifts from thinking about security in terms of “trust but verify” to approaching security from the stance of “never trust, always verify.” Google and many other security forerunners have successfully adopted this model, which was also recommended for federal implementation by the House Oversight and Government Reform Committee in 2016.  

A Zero Trust network treats all traffic as untrusted and focuses on creating barriers that compartmentalize different parts of the network. To embrace the tenets of a Zero Trust network, security architects must redesign segmentation around business needs to protect effectively against attacks. This approach protects data from unauthorized applications or users.
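
As a rough sketch of the "never trust, always verify" stance, the example below authenticates and authorizes every request against a least-privilege policy and logs the decision, regardless of where the request originates. The web framework, token scheme and policy table are illustrative assumptions, not a reference Zero Trust implementation.

```python
# Minimal sketch: authenticate, authorize and log every request; trust nothing
# by default. Tokens and the policy table are placeholders.
import logging
from flask import Flask, request, abort

app = Flask(__name__)
logging.basicConfig(level=logging.INFO)

# Hypothetical least-privilege policy: role -> resources that role may reach.
POLICY = {"analyst": {"/reports"}, "admin": {"/reports", "/admin"}}

def verify_identity(token):
    # Stand-in for real verification (e.g. mutual TLS or a signed token),
    # performed on every request rather than once at the network perimeter.
    return {"tok-analyst": "analyst", "tok-admin": "admin"}.get(token)

@app.before_request
def zero_trust_gate():
    role = verify_identity(request.headers.get("Authorization", ""))
    allowed = role is not None and request.path in POLICY.get(role, set())
    # Inspect and log all traffic, allowed or denied.
    logging.info("request %s %s role=%s allowed=%s",
                 request.remote_addr, request.path, role, allowed)
    if not allowed:
        abort(403)  # no implicit trust based on network location

@app.route("/reports")
def reports():
    return "report data"

if __name__ == "__main__":
    app.run()
```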

When implemented properly, the Zero Trust architecture:

  • Ensures all resources are accessed securely regardless of location

  • Employs a “least privilege” strategy that strictly enforces access control

  • Inspects and logs all traffic

While other benefits exist, implementing Zero Trust provides:

  • Significant improvements in mitigating data loss and detecting and averting future threats

  • Material cost savings for IT security

  • Enhanced capabilities for digital transformation initiatives such as mobility

  • Efficiency in meeting security and privacy mandates

Zero Trust is a new weapon in winning the cyberwar and is “the first step in restoring confidence and security in federal information technology.” To schedule time to learn more about the steps to Zero Trust success, please contact Mark Western, mwestern@governmentcio.com

Cloud Cuts Application Development Time for Air Force and DHS’ CIS

Federal agencies are moving to cloud services to focus less on data centers and infrastructure, and more on application deployment. For U.S. Air Force Deputy Chief Information Officer William Marion II, this means being able to deliver IT at commercial speeds with agility.

The Air Force operates at a massive, highly mobile scale, with 700,000 on-premises endpoints and 2,000 business systems requiring efficient IT and cyber support. Its journey to the cloud began two years ago, and it has since adopted five or six main services to deliver as an enterprise across the Air Force.

The Air Force also leverages Defense Department Federal Risk and Authorization Management Program-Plus (FedRAMP-Plus)-authorized platform-as-a-service, software-as-a-service and infrastructure-as-a-service solutions, which add advanced security requirements for DOD.

“Right now, we’re really focused [on] providing global, at-scale enterprise services,” Marion said.

However, migration to the cloud is only one part of a larger initiative within the Air Force.

“That transformation of IT isn’t just about providing more security, more agility and more speed,” he said at the Amazon Web Services Public Sector Summit on June 14 in Washington, D.C. Moving to the cloud also allows industry to drive the innovation and scalability of the infrastructure, freeing up resources so Marion’s workforce can spend more time on cybersecurity business and operations. This means improving data security and application security.

U.S. Citizenship and Immigration Services also plans to expand its cloud use, according to Sarah Fahden, the associate chief of the USCIS Verification Program. Speaking at the same event, Fahden said her team is modernizing the E-Verify and SAVE applications into the AWS cloud, while moving the legacy system into the same environment to eliminate the data center. These apps validate and verify employee and benefit-applicant immigration statuses.

For the Air Force, cloud also replaces the lengthy process of provisioning software and platform service environments, speeds up application deployment and allows users to spend more time on application functionalities and capabilities. Ultimately, it’s more important to deliver the application to the end user, Marion said. As for security, he said it is “far beyond everything else that we had running."

Fahden also argued security is better with the cloud because it enables a quicker reaction to security events, “in ways that you never would have been able to do in a datacenter." USCIS is able to log much more data and store it cost-efficiently, and the cloud conducts security scans and daily monitoring for every single asset and tool implemented.

The cloud is providing a roadmap for other innovative initiatives as well. The Air Force hopes to leverage the cloud to tackle its next core concern, operational readiness from logistics to personnel, by layering big data analytics and data mining capabilities.

USCIS is building a Person Centric Query Service so users can submit a request and see all transactions involving an immigrant or nonimmigrant across DHS and external systems. The service also gives a view of the person’s past interactions with DHS and other agencies as he or she passed through the immigration system.

As part of the verification modernization, Fahden said she realized the program had many manual cases and not enough time to process them all, and data was coming in from all different systems throughout USCIS. The agency has already made progress in the past few months by consolidating and migrating records to the PCQS, and hopes it will improve the quality of data coming into the Verification Program.

“For immigration though, it’s endless, the possibilities,” Fahden said.  

Commercial Cloud Passes the IC Test

CIA's adoption of commercial cloud services is boosting the intelligence community’s capabilities and resulting in unprecedented innovation, CIA Chief Information Officer John G. Edwards said at the June 14 AWS Public Sector Summit in Washington, D.C. Edwards went so far as to say moving to the cloud “was the best decision we ever made.”

The agency's high-profile move to the cloud began about four years ago when it chose Amazon Web Services to build a private cloud for the IC’s 17 agencies. AWS operates the entire C2S region on the CIA’s premises, but this cloud is not connected to the internet.  

The decision to move to C2S was simple.

“We want to be like commercial, we don’t want to be like government,” Edwards said. Starting this process was challenging. Too often, government pushes contractors and partners to act more like government. That’s not what Edwards wanted. He forced the agency to act more like industry.

This approach is reaping successes for CIA and is having a “material impact on … the entire IC.” This ripple effect can be attributed to the six C2S superpowers changing the way CIA does business, according to Edwards:

  1. Speed: C2S provides the agency with infrastructure at the speed of mission. Before bringing the cloud on premises, provisioning a server took as long as 180 days. With AWS, it only takes minutes (a sketch of API-driven provisioning follows this list). “That’s a game changer for us,” Edwards said.

  2. Power: C2S is more powerful than CIA’s toughest mission challenges. The agency is bringing new services and features into its cloud at a commercial-matching velocity. It also provides a classified IC Marketplace (similar to AWS’ commercial marketplace) that has more than 100 applications with 70 more in the pipeline. CIA analysts and developers can download apps in minutes, test them and lease them if they meet mission needs. It eliminates the need for market surveys and lengthy, complex acquisition cycles that took months or years. The cloud also has a DevOps Factory with development tools for writing higher-quality, more consistent and secure code. It now has more than 4,000 developers across the community.  

  3. Scalability: C2S makes it possible to scale vast infrastructure in seconds. The cloud provides unlimited capacity and instant provisioning to stand up and run new apps and capabilities, where in the past, doing so in a data center wasted time and resources. With the cloud, the CIA is saving dollars, avoiding unnecessary costs and getting mission impact. It now has apps that don’t require recapitalizing hardware, cost hundreds of dollars a day to run (rather than thousands) and take only minutes to spin up.

  4. Strength: C2S provides secure services with insight into configurations and infinite audit. “I’m never going to say anything you do in the cyber world is totally invincible, but this is pretty close,” Edwards said. The agency took this battle-tested cloud connected to the internet and dropped it behind its “guards, gates and guns,” and disconnected it from the internet, potentially making it more secure. AWS and CIA added even more security overlay controls and audits to meet specific system needs. “I would argue that this is probably the most secure thing there is out there,” Edwards said.

  5. Durability: C2S provides assured resilience and integrity with high-performing availability. CIA has a complete region with three availability zones, or three geographically dispersed data centers. So if one of those zones fails, the entire cloud region won’t. “That’s built in, we don’t have to think about it anymore, and that’s durability,” Edwards said.

  6. Truth: C2S provides the vessel for advanced analytics through artificial intelligence and machine insights. In the cloud, the agency can see inside its data, but also get the value, context and meaning of the data, and integrate it rather than just collect it. “We’re now discovering things that was never before discoverable,” Edwards said. Developers, data scientists and analysts are able to run analyses on complex datasets and work on difficult tasks and problems in the cloud.
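
To illustrate why provisioning drops from months to minutes (the speed item above), this sketch requests a virtual server through the commercial AWS API using boto3. C2S itself is air-gapped and its interfaces are not public, so this only shows the commercial pattern; the AMI ID, instance type and region are placeholders.

```python
# Minimal sketch of API-driven provisioning on commercial AWS (not C2S itself);
# the image ID, instance type and region are placeholder values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]

# Block until the instance is running, typically a matter of minutes.
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
print("provisioned", instance_id)
```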

For CIA, the opportunities with C2S don’t end here. Edwards said he’s exploring the use of artificial intelligence and new intelligence services to improve the accessibility of systems for its entire workforce, including those with disabilities or special needs. He’s excited to bring those capabilities into the IC.

Congressman Jim Langevin’s 3 Cybersecurity Priorities

Rep. Jim Langevin, D-R.I., ranking member of the Subcommittee on Emerging Threats and Capabilities, identified a leading problem facing national cybersecurity today: As technology continues to improve, the networks that need to be protected are only becoming more complicated.

Speaking at the Institute for Critical Infrastructure Technology Forum on June 7 in Washington, D.C., Langevin explained that patching vulnerabilities has traditionally involved modifying software with code changes. Yet, when the vulnerability is a trained, machine-learning behavior, how does it get patched?

“That’s the problem,” he told the forum audience. Langevin understands the complexity of the topic as co-founder and co-chair of the Congressional Cybersecurity Caucus. Industry and policy leaders need to find a way to encourage innovation and security while keeping pace with the innovation of technology.

According to Langevin, approaching these challenges consists of three major components:

  1. Ensuring the security of new devices: Internet of things security should be addressed with the same techniques proven successful on smartphones and desktop computers, meaning extending practices like automatic patching and encryption to all devices. The government should work with private industry to develop guidelines for patching connected devices and standards for enabling those updates. The guidelines, combined with informed consumers, can help tech providers deliver security patches.

  2. An increase in shared situational awareness: Shared knowledge of attacks across the public and private sectors can improve vulnerability awareness and build a more complete picture of the entire threat environment. Congress took steps to improve this through the Cybersecurity Act of 2015, which requires the director of national intelligence and the departments of Homeland Security, Defense and Justice to create procedures to share cybersecurity threat information with private entities. “So if we can collect, aggregate and also disseminate intelligence from these types of companies … I believe we could more quickly respond to cyberattacks,” Langevin said.

  3. Continuing to build an appropriate response: Detecting and protecting against cyber threats requires both private industry and government to respond fittingly when an attack does occur, especially to nation-state actors. “I believe [we need to] stop saying how a responsible country should behave, and start behaving that way,” Langevin said. This includes holding other countries more accountable for malicious cyber behavior or structured state-sponsored attacks, and continuing to set the standard for cybersecurity tolerance and rules.

Langevin believes establishing a policy framework around these three areas can continue to advance the nation in cyberspace, in spite of the increasing threats faced today.

NIST Information Security Compliance and DevOps

The goal of this article is to consider how an organization can adopt DevOps methods for delivering software, while also remaining compliant with Federal Information Security Management Act-mandated information security requirements based on National Institute of Standards and Technology guidance.

For many organizations subject to these requirements, implementing and assessing NIST-recommended security controls in order to achieve an Authorization to Operate for new and existing systems has been a bottleneck to releasing software. It may be hard for these organizations to envision using DevOps to increase the frequency of releases while simultaneously complying with information security requirements.

However, NIST guidance envisions organizations using iterative software delivery methods, and has described methods that can support an iterative software development process with frequent releases. These include automated continuous monitoring and ongoing authorization to replace the traditional triennial big-bang certification and authorization of software.

Using the NIST Risk Management Framework and implementing NIST security controls need not impede a DevOps organization. On the contrary, it should be possible to even more effectively secure information and systems using both.

DevOps in Brief

DevOps is a portmanteau of Development and Operations, and the goal of DevOps is to break down the traditional separation between these two organizational units. Traditionally, the development unit creates and tests new or modified code, which they hand off to the operations unit to deploy onto the servers they maintain and operate. This has often resulted in a bottleneck, especially if the development side has adopted an agile and iterative approach to releasing software updates while the operations side has resisted more frequent production releases because of perceived risk.

DevOps techniques such as involving operations staff from the start of development, automating tests, using development and test environments that resemble production as much as possible, and releasing changes in small batches aim to reduce the risks associated with production releases. The foundation of DevOps is the deployment pipeline, a set of tools that coordinates and automates most of the code building, testing and deployment steps and is therefore critical for supporting more frequent production releases. By automating deployment steps, operations staff can avoid tedious and error-prone manual software installation and environment configuration steps.

NIST Risk Management Framework in Brief

FISMA mandates compliance with the NIST Federal Information Processing Standards. NIST FIPS-199 and FIPS-200 require categorizing information systems based on their impact level (Low, Moderate and High), as determined by the information types used by the systems, and implementing the recommended baseline controls for those levels using the security controls catalog found in NIST Special Publication SP 800-53.
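
As a small worked example of that categorization step, the sketch below rates two hypothetical information types for confidentiality, integrity and availability, takes the high-water mark across them, and uses the highest objective as the system impact level that selects the SP 800-53 baseline. The information types and ratings are made up for illustration.

```python
# Minimal sketch of FIPS-199 categorization: take the high-water mark across
# hypothetical information types, then use the highest objective as the
# system impact level that drives baseline selection.
LEVELS = {"Low": 1, "Moderate": 2, "High": 3}

info_types = {
    "public web content":    {"C": "Low",  "I": "Moderate", "A": "Low"},
    "benefit applicant PII": {"C": "High", "I": "Moderate", "A": "Moderate"},
}

def high_water_mark(types):
    overall = {"C": "Low", "I": "Low", "A": "Low"}
    for ratings in types.values():
        for objective, level in ratings.items():
            if LEVELS[level] > LEVELS[overall[objective]]:
                overall[objective] = level
    return overall

category = high_water_mark(info_types)
system_level = max(category.values(), key=LEVELS.get)
print(category, "->", system_level, "baseline")
```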

The NIST SP 800-37, titled “Guide for Applying the Risk Management Framework to Federal Information Systems: A Security Life Cycle Approach,” also known more simply as the “RMF,” describes a six-step process for securing information systems. The steps are: Categorize, Select, Implement, Assess, Authorize and Monitor.

As the full RMF title implies, it is meant to apply to the entire development lifecycle, which it defines as including initiation, development, implementation, operation/maintenance and disposition. Although this sounds similar to the standard waterfall process, the RMF states it can be used with any lifecycle process, including waterfall, spiral and agile.

For example, in the supplemental guidance for the Security Control Assessment task of step 2, the RMF states that “when iterative development processes such as Agile development are employed, this typically results in an iterative assessment as each cycle is conducted." This suggests assessing controls as they are implemented, within each iteration, rather than waiting until development of the product is done. The following quote from the RMF is lengthy, but is useful for understanding NIST’s conception of ongoing authorization:

“Authorization termination dates are influenced by federal and/or organizational policies which may establish maximum authorization periods. Organizations may choose to eliminate the authorization termination date if the continuous monitoring program is sufficiently robust to provide the authorizing official with the needed information to conduct ongoing risk determination and risk acceptance activities with regard to the security state of the information system and the ongoing effectiveness of security controls employed within and inherited by the system.”

With regard to DevOps, if assessments are completed for security controls impacted during development of each release within each iteration, and sufficient continuous monitoring is in place for the production version of the application and its environment, then it should be possible to maintain an ongoing authorization for the product without the need to reassess every control for every release. NIST has provided a paper, "Supplemental Guidance on Ongoing Authorization: Transitioning to Near Real-Time Risk Management," for organizations interested in implementing this approach.

Potential Issues in Implementing Security Controls in DevOps Environment

DevOps shows a lot of promise in increasing benefits to customers through more frequent releases of new functionality while actually reducing the risk of each release. Its goal is to make the release process a low-stress event that can be carried out at any time. However, most organizations have other units with a stake in IT beyond development and operations, most notably quality assurance and information security.

InfoSec in particular may be averse to frequent releases because of concerns that they may lead to undetected vulnerabilities making it into production. The job of information security in organizations bound by FISMA and NIST requirements is to ensure that the owners of information systems have managed risk by applying the SP 800-53 controls and that an authorizing official has explicitly accepted the remaining risk. The RMF requires that the authorization for a system to operate in production be based on an assessment of its security controls and verification that they are properly implemented and operating as intended. The authorization step is often difficult for development teams to pass. Some reasons for this difficulty include:

  • Leaving security assessments until later in the development process leads to discovery of vulnerabilities that could have been avoided or more easily corrected earlier in the process.
  • Not including security control requirements from the start of planning and designing a new system leads to costly changes later in development or failure to achieve an ATO.
  • Lack of communication and coordination between information security teams and the development and operations teams can lead to an adversarial relationship if the teams blame each other for security vulnerabilities.
  • Manual security control testing is too slow for the DevOps goal of more frequent releases.

Techniques and Tools for Overcoming Issues

Organizations seeking to establish a DevOps process that can overcome the difficulties listed above should consider the following techniques and tools to address them:

  • Start assessments and vulnerability scans from the earliest development cycles, rather than leaving them to just prior to release. Scan in all environments including development, QA, pre-production and production.
  • Categorize the system and select security controls before beginning design and development iterations so that they are included in the overall system requirements rather than added on.
  • DevOps brings development and operations together, so information security (as well as QA) needs to join them. Consider embedding a security analyst with the team to identify requirements and develop tests.
  • There are NIST-approved tools for assessing security controls based on the Security Content Automation Protocol (SCAP). Developers should create automated tests for controls implemented within the system (one possible pipeline check is sketched below) and should brainstorm ways to automate assessing controls outside of the system, such as security documentation. NIST suggests using the Open Checklist Interactive Language for standardizing the collection of data for assessments that cannot be automated.
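
One way to act on the last two points is to run an SCAP-based scan as a gate in the deployment pipeline. The sketch below shells out to the open-source OpenSCAP scanner and blocks the release when the scan reports failures; the profile ID and datastream file are placeholders, and the approach assumes the oscap tool is available on the build host.

```python
# Minimal sketch of an automated control assessment gating a pipeline stage.
# Assumes the OpenSCAP `oscap` scanner is installed; the profile and
# datastream names are placeholders for whatever fits the target platform.
import subprocess
import sys

scan = subprocess.run(
    [
        "oscap", "xccdf", "eval",
        "--profile", "xccdf_org.example_profile",  # placeholder profile id
        "--results", "scap-results.xml",
        "ssg-example-ds.xml",                      # placeholder SCAP datastream
    ],
    capture_output=True,
    text=True,
)

# oscap exits 0 only when every evaluated rule passes; anything else should
# stop the release so findings are fixed within the iteration.
if scan.returncode != 0:
    print(scan.stdout)
    sys.exit("security control assessment failed; blocking this release")
print("controls passed; results archived in scap-results.xml")
```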

DHS Applies R&D Innovation to its Cybersecurity Challenges

Today’s frequency of disruptive adversarial cyberattacks has the Homeland Security Department focused on innovative research and development models. For Customs and Border Protection, network security is at the core of its overall mission.

“Being a CISO, we spend a whole lot of time preparing for something that we really, really hope never comes,” said CBP Chief Information Security Officer Alma Cole at the DHS Science and Technology Cyber Security Showcase and Technical Workshop on July 12 in Washington, D.C.

DHS’s S&T Directorate established the Cyber Security Division to improve the security of the nation’s information infrastructure and networks. One way CSD is doing this is by coordinating R&D across DHS and with the broader R&D community, department customers, other government agencies, international partners and the private sector.

For CBP, security is critical for assuring the systems its frontline officers rely on to defend the country are available and running, despite adversarial attacks. Yet, Cole said, CBP is challenged with strengthening security as it transitions to cloud technology and away from its perimeter defense model.

“We’re trying to do less with less,” Cole explained. “We have, oftentimes, less resources, however we really do need to start making better decisions about how we do security and how we manage our risk.”

The cloud transition means refocusing on authentication, encryption, integrity validation and building security into the cloud development pipeline. All of which, Cole said, requires more digital research and knowledge.

“As we develop applications and publish them into the cloud and put them into containers, once we get that down, we know that the baselines are solid and we can monitor the integrity of those baselines to ensure that nothing actually happens to them,” he said.

The baselines need to be ready for the next build so new applications work well with everything else already in the network. This way, CBP can push out new applications or replace previous ones rather than patching and morphing in its current environment.

“That creates lots and lots of risk and lots of unknowns,” Cole said.

CBP is also challenged by its air-gapped networks, which are computers and networks not directly connected to the internet or to any other connected devices. In CBP, these are things like cameras and sensors, which did not require as stringent security controls because they weren’t connected.

“That assumption’s not holding true anymore,” Cole said. “More and more, we’re looking to connect those air-gapped systems.”

CBP needs to bring those legacy technologies onto a more advanced, connected network, and find the most effective way to shield everything else from those assets. This can be done with a zero-trust model for cybersecurity (developed by Forrester), which assumes all traffic is untrusted. This way, security is built into the DNA of IT architecture through situational awareness, and vulnerability and incident management capabilities.

DHS also relies on Continuous Diagnostics and Mitigation for enhanced visibility and governance during a cloud transition. It helps shed light on dark spots in security so organizations can better understand everything in their networks and how it all interacts. Ultimately, CBP wants to ensure it is engineering solutions designed to recover or be reengineered if something goes wrong.

“That sort of assumption is designed into the solution so we can continue to run despite having security issues or intrusions or problems,” Cole said.

Cole is also tackling technology portfolio rationalization. Many security tools are brought in to do one thing, and may not even be used properly for that task because they haven’t been configured correctly or require additional tools.

“What we really are trying to do is reduce the number of technologies that we have to manage, and then try to get those into a very, very mature state,” Cole said, in order to get all the value out of them. Then, CBP can focus on properly defending what is inside its network, rather than trying to be an “inch wide and a mile deep with everything in touch with security and security monitoring.”

This requires cooperation from the R&D community. While the solution doesn’t need to be a single product, Cole believes there needs to be better integration. If a single tool has a valuable function for CBP but doesn’t work well with anything else in the environment or requires separate management, this creates further challenges. However, if the R&D community can bring something in that works with the other technologies, is integrated, doesn’t require a separate team or console for management, and can work seamlessly with the other tools CBP is currently maturing, it will benefit the department greatly.
