Artificial Intelligence: Friend or Enemy of Cybersecurity?

Security strategies must undergo a radical revolution. Tomorrow's security devices will need to see and interoperate with one another, recognizing changes in interconnected environments so they can automatically anticipate risks and update and enforce policies.

Devices must have the ability to monitor and share critical information and synchronize their responses to detect threats.

Does that sound futuristic? Not really. A technology that has recently grabbed attention lays the foundation for exactly this kind of automation: Intent-Based Network Security (IBNS).

This technology provides broad visibility across the entire distributed network and enables integrated security solutions to adapt automatically to changing network configurations and changing needs, responding to threats in a synchronized way.

These solutions can also dynamically segment the network, isolate affected devices, and remove malware. Likewise, security measures and countermeasures can be updated automatically as new devices, services, and workloads are deployed or moved anywhere in the network, from endpoint devices to the cloud.
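As a rough illustration of that workflow, the sketch below (in Python, with invented names such as IntentPolicy and EnforcementPoint, since the article does not reference any specific IBNS product or API) shows how a single detection event could drive a synchronized quarantine across every enforcement point in the network.

```python
# Minimal sketch of an intent-based quarantine workflow.
# All names (IntentPolicy, EnforcementPoint, device IDs) are hypothetical;
# a real IBNS platform would expose its own API for this.
from dataclasses import dataclass, field


@dataclass
class EnforcementPoint:
    """A firewall, switch, or access point that applies segment policy."""
    name: str
    segments: dict = field(default_factory=dict)  # device_id -> segment

    def apply(self, device_id: str, segment: str) -> None:
        self.segments[device_id] = segment
        print(f"[{self.name}] {device_id} -> segment '{segment}'")


@dataclass
class IntentPolicy:
    """Business intent: infected devices belong in the quarantine segment."""
    quarantine_segment: str = "quarantine"
    enforcement_points: list = field(default_factory=list)

    def on_threat_detected(self, device_id: str) -> None:
        # Synchronize the response: every enforcement point isolates the
        # device at once, instead of each product acting on its own.
        for ep in self.enforcement_points:
            ep.apply(device_id, self.quarantine_segment)


if __name__ == "__main__":
    policy = IntentPolicy(enforcement_points=[
        EnforcementPoint("edge-firewall"),
        EnforcementPoint("campus-switch"),
        EnforcementPoint("cloud-gateway"),
    ])
    # A detection product reports a compromised IoT camera.
    policy.on_threat_detected("iot-camera-17")
```

The point of the sketch is the design, not the code: the intent ("infected devices are quarantined") is declared once, and enforcement everywhere in the network follows from it automatically.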

This tightly integrated, automated security enables a coordinated response to threats that is far greater than the sum of the individual security solutions protecting the network.

Artificial intelligence and machine learning have become significant allies for cybersecurity. Machine learning will be reinforced by Internet of Things devices packed with information and by predictive applications that help safeguard the network. But securing those "things" and that information, which are ready targets or entry points for cybercriminals, is a challenge in itself.

The quality of intelligence

One of the greatest challenges of using artificial intelligence and machine learning lies in the quality of the intelligence. Today, cyber-threat intelligence is highly prone to false positives because of the volatile nature of the IoT.

Threats can change in a matter of seconds: a device can be clean one moment, infect the next device, and be cleaned again, all within a single low-latency cycle.
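One way to picture why such volatility produces false positives: if an indicator's confidence is not decayed with its age, a device that was infected and then cleaned minutes ago still looks malicious. The sketch below uses an assumed exponential decay with an illustrative five-minute half-life and a 0.7 action threshold; none of these values come from the article or any real threat feed.

```python
# Minimal sketch: decay the confidence of a threat indicator with its age,
# so short-lived IoT infections that may already have been cleaned do not
# keep triggering automated responses. Half-life and threshold values are
# illustrative assumptions only.
import math
import time


def decayed_confidence(base_confidence: float,
                       observed_at: float,
                       half_life_s: float = 300.0) -> float:
    """Exponentially decay confidence; after one half-life it is halved."""
    age = max(0.0, time.time() - observed_at)
    return base_confidence * math.exp(-math.log(2) * age / half_life_s)


def should_act(base_confidence: float, observed_at: float,
               threshold: float = 0.7) -> bool:
    return decayed_confidence(base_confidence, observed_at) >= threshold


# An indicator seen 10 minutes ago on a volatile IoT device is no longer
# trusted enough to drive an automatic block on its own.
print(should_act(0.9, time.time() - 600))   # False
print(should_act(0.9, time.time() - 30))    # True
```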

Improving the quality of threat intelligence is extremely important as IT teams increasingly hand control to artificial intelligence to perform work they would otherwise do themselves. This is an exercise in trust, and that is a unique challenge.

As an industry, we cannot hand total control to an automated device; we need to balance operational control with the essential tasks that staff must still perform. It is this working relationship that will make artificial intelligence and machine learning applications for cyber defense truly effective.

Because there is still a shortage of cybersecurity talent, products and services must be built with greater automation so they can correlate threat intelligence, determine the level of risk, and automatically synchronize a coordinated response.

By the time administrators try to tackle a problem on their own, it is often too late, and the attempt may even cause a bigger problem or generate more work. Much of this can be handled automatically, either through a direct exchange of intelligence between detection and prevention products or through assisted mitigation, a combination of people and technology working together.
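A minimal sketch of that split between fully automated response and assisted mitigation, assuming invented indicator fields and risk thresholds (the article names no specific product or scoring model): independent detections are correlated into one risk score, and only high-risk findings are acted on without a human.

```python
# Minimal sketch of the automated-versus-assisted split described above.
# Indicator fields and thresholds are illustrative assumptions.
from typing import Dict, List


def correlate_risk(indicators: List[Dict]) -> float:
    """Combine independent detections into one risk score in [0, 1]."""
    no_risk = 1.0
    for ind in indicators:
        no_risk *= 1.0 - ind["confidence"] * ind["severity"]
    return 1.0 - no_risk


def respond(device_id: str, indicators: List[Dict]) -> str:
    risk = correlate_risk(indicators)
    if risk >= 0.8:
        # High confidence: machine acts alone.
        return f"auto-block {device_id} (risk {risk:.2f})"
    if risk >= 0.4:
        # Assisted mitigation: people and technology together.
        return f"open analyst ticket for {device_id} (risk {risk:.2f})"
    return f"keep monitoring {device_id} (risk {risk:.2f})"


print(respond("iot-camera-17", [
    {"source": "ids", "confidence": 0.9, "severity": 0.8},
    {"source": "sandbox", "confidence": 0.7, "severity": 0.9},
]))
```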

Automation also allows security teams to devote more time to the company's business goals rather than to the routine administration of cybersecurity.

In the future, artificial intelligence in cybersecurity will constantly adapt to the growth of the attack surface. Today, we are barely connecting the dots, sharing information and applying that information to systems.

Today, people make these complex decisions, which require human correlation of intelligence. In the coming years, a mature artificial intelligence system is expected to be able to make many of these complex decisions on its own.

What is not feasible is total automation, that is, transferring 100% of control to machines so that they make every decision. People and machines must work together.

The next generation of “conscious” malware will use artificial intelligence to behave like a human, perform reconnaissance activities, identify targets, choose attack methods, and intelligently evade detection systems.

Just as organizations can use artificial intelligence to improve their security posture, cybercriminals can also start using it to develop smarter malware.

It will be guided by offensive intelligence gathering and analysis, such as the types of devices deployed in a network segment, the traffic flowing through it, the applications in use, transaction details, or the time of day at which those transactions occur.

The longer a threat remains inside the network, the greater its ability to operate independently, blend into the environment, select tools suited to the target platform, and eventually take countermeasures against the security tools it finds in place.

This is precisely why an approach is needed in which security solutions for the network, access, devices, applications, data centers, and the cloud work together as an integrated, collaborative system.

2 Comments

  1. Ogeto Omwancha D. September 20, 2017 at 1:04 pm

    The evolution of artificial intelligence and machine learning has created possibilities that were previously inconceivable. Standardization is the need of the hour, as we are surrounded by a variety of Internet-connected products and rapid development in digital technology and interconnected devices. Multiple industries, such as IT and aerospace, rely on machine learning and artificial intelligence. Computing power, storage capabilities, and data collection capacities are all enhanced with assistance from AI. Just as organizations can use artificial intelligence to enhance their security posture, cybercriminals may begin to use it to build smarter malware. This is precisely why a security fabric approach is needed, with security solutions for the network, endpoint, application, data center, cloud computing, and access working together as an integrated and collaborative whole, combined with actionable intelligence to hold a strong position on autonomous security and automated defense. Ironically, the cure for an artificially intelligent attack may lie in artificial intelligence itself. Artificial intelligence plays a critical role in cybersecurity, finding new exploits and identifying weaknesses with minimal human intervention. Using AI also helps improve incident response time and prevent hackers from penetrating basic firewalls.

  2. Oswaldo Antonio December 22, 2017 at 6:40 pm

    The constant development of Artificial Intelligence is amazing; it is a tool that can be used for good things as well as bad. There are numerous examples that show how big AI's potential is for malicious purposes: "botnets" that anyone can buy and use, which pretend to be human in order to get past systems like CAPTCHA, or "chatbots" that trick us into giving away more information than we should. This is just the beginning, and the future of these malicious uses is as promising as the solutions to this big problem. An expert on the matter, Brian Krebs, once said that putting chatbots with artificial intelligence in the hands of enterprises is dangerous because, although this innovation promises to be a helpful tool for us, it can also be used for social engineering and for playing games with people's information. I think it promises to be a great project, but it has to be used by the right people. Good article!
