DDoS Protection is the Foundation for Application, Site and Data Availability

When we think of DDoS protection, we often think about how to keep our website up and running. While searching for a security solution, you’ll find several options that look similar on the surface. The main difference is whether your organization requires a cloud, on-premises or hybrid solution that combines the best of both worlds. Finding a DDoS mitigation/protection solution seems simple, but there are several things to consider.

It’s important to remember that DDoS attacks don’t just knock websites offline. While the majority do cause a service disruption, roughly 90 percent of the time the website is not completely unavailable; instead, its performance is degraded. As a result, organizations should look for a DDoS solution that can both optimize application performance and protect against DDoS attacks. The two functions are natural bedfellows.

The other thing we often forget is that most traditional DDoS solutions, whether on-premises or in the cloud, cannot protect us from an upstream or downstream event.

  1. If your carrier is hit with a DDoS attack upstream, your own link may be fine, but you would receive little or no traffic through that pipe, leaving you unable to operate.
  2. If your infrastructure provider goes down due to a DDoS attack on its key infrastructure, your organization’s website will go down regardless of how well your DDoS solution is working.

Many DDoS providers will tell you these are not part of a DDoS strategy. I beg to differ.

Finding the Right DDoS Solution

DDoS protection was born out of the need to improve availability and guarantee performance. Today, this is critical. We have become an application-driven world where digital interactions dominate, and a bad experience using an app is worse for customer satisfaction and loyalty than an outage. Most companies are moving into shared infrastructure environments—otherwise known as the “cloud”—where the performance of the underlying infrastructure is no longer controlled by the end user. With that in mind, three capabilities stand out when evaluating a solution:

  1. Data center or host infrastructure rerouting capabilities give organizations the ability to reroute traffic to secondary data centers or application servers if there is a performance problem caused by something that the traditional DDoS prevention solution cannot negate. This may or may not be caused by a traditional DDoS attack, but either way, it’s important to understand how to mitigate the risk of a denial of service caused by infrastructure failure.
  2. Simple-to-use link or host availability solutions offer a unified interface for conducting WAN failover in the event that the upstream provider is compromised. Companies can use BGP, but BGP is complex and rigid. The future needs to be simple and flexible.
  3. Infrastructure and application performance optimization is critical. If we can limit the amount of compute-per-application transactions, we can reduce the likelihood that a capacity problem with the underlying architecture can cause an outage. Instead of thinking about just avoiding performance degradation, what if we actually improve the performance SLA while also limiting risk? It’s similar to making the decision to invest your money as opposed to burying it in the ground.
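The link/host availability idea in point 2 can be sketched as a simple health-driven failover decision. The endpoint names and the three-strikes failure threshold below are hypothetical; this is a minimal illustration of the logic, not a replacement for BGP or a commercial failover product.

```python
class FailoverMonitor:
    """Chooses an active endpoint based on repeated health-check results.

    Fails over from primary to secondary after `threshold` consecutive
    failed checks, and fails back as soon as the primary recovers.
    """

    def __init__(self, primary, secondary, threshold=3):
        self.primary = primary
        self.secondary = secondary
        self.threshold = threshold
        self.failures = 0

    def record(self, primary_healthy):
        """Record one health-check result and return the endpoint to use."""
        if primary_healthy:
            self.failures = 0
        else:
            self.failures += 1
        if self.failures >= self.threshold:
            return self.secondary
        return self.primary


# Simulated health checks: the primary data center fails three times in a row.
monitor = FailoverMonitor("dc-primary.example.com", "dc-secondary.example.com")
results = [monitor.record(h) for h in (True, False, False, False, True)]
```

In practice the `record()` call would be driven by periodic probes (HTTP checks, ICMP, or synthetic transactions) and the returned endpoint would feed a DNS or load-balancer update.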

Today you can look at buying separate products to accomplish these needs, but you are then left with an age-old problem: a disparate collection of poorly integrated best-of-breed solutions that don’t work well together.

These products should work together as part of a holistic solution where each solution can compensate and enhance the performance of the other and ultimately help improve and ensure application availability, performance and reliability. The goal should be to create a resilient architecture to prevent or limit the impact of DoS and DDoS attacks of any kind.

Source: https://securityboulevard.com/2018/09/ddos-protection-is-the-foundation-for-application-site-and-data-availability/

Loss of Customer Trust and Confidence Biggest Consequence of DDoS Attacks

A new study from Corero Network Security has revealed that the most damaging consequence of a distributed denial-of-service (DDoS) attack for a business is the erosion of customer trust and confidence.

The firm surveyed IT security professionals at this year’s Infosecurity Europe, with 42% of respondents citing loss of customer trust and confidence as the worst effect of suffering a DDoS attack, while just 26% cited data theft as the most damaging.

The third most cited consequence among those polled was potential revenue loss (13%), followed by the threat of intellectual property theft (10%).

“Network and web services availability are crucial to ensuring customer satisfaction and sustaining customer trust and confidence in a brand,” said Ashley Stephenson, CEO at Corero Network Security. “These indicators are vital to both the retention and acquisition of customers in highly competitive markets. When an end user is denied access to internet-facing applications or network outages degrade their experience, it immediately impacts brand reputation.”

Corero’s findings come at a time when DDoS attacks continue to cause havoc for organizations around the world.

Link11’s Distributed Denial of Service Report for Europe revealed that DDoS attacks remained at a high level during Q2 2018, with attackers focusing on European targets 9,325 times during the period of April-June. That equated to an average of 102 attacks per day.

“The cyber-threat landscape has become increasingly sophisticated and companies remain vulnerable to DDoS because many traditional security infrastructure products, such as firewalls and IPS, are not sufficient to mitigate modern attacks,” added Corero’s Stephenson. “Proactive DDoS protection is a critical element in proper cybersecurity protection against loss of service and the potential for advanced, multi-modal attack strategies.”

“With our digital economy utterly dependent upon access to the internet, organizations should think carefully about taking steps to proactively protect business continuity, particularly including DDoS mitigation.”

Source: https://www.infosecurity-magazine.com/news/loss-trust-confidence-ddos/

How to Improve Website Resilience for DDoS Attacks – Part II – Caching

In the first post of this series, we talked about the practices that will optimize your site and increase your website’s resilience to DDoS attacks. Today, we are going to focus on caching best practices that can reduce the chances of a DDoS attack bringing down your site.

Website caching is a technique for storing content in a ready-to-go state, with little or no code processing required. When a CDN is in place, the cache stores content on a server closer to the visitor. It’s essentially a point-in-time snapshot of the content.

Caching

When a website is accessed, the server usually needs to execute the website’s code, render the result for the visitor, and deliver all of the site’s assets. This all takes a toll on your server resources, slowing down total page load time. To avoid this overhead, it’s necessary to leverage certain types of caching whenever possible.

Caching not only decreases load-time indicators such as time to first byte (TTFB); it also saves your server resources.

Types of Caching

There are all sorts of caching types and strategies, and we won’t cover them all. In this article, we’ll look at the three we see most often in practice.

Static Files

The first type is the simplest one, called static files caching.

Images, videos, CSS, JavaScript, and fonts should always be served from a content delivery network (CDN). These providers operate thousands of servers spread across global data centers, which means they can deliver this content far faster than your server ever could on its own.

When using a CDN, the chances of your server suffering from bandwidth exhaustion attacks are minimal.

Your website will also be much faster, given that a large portion of website content is composed of static files, and those would be served by the CDN.
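As a concrete sketch of this practice, static assets can be given long-lived caching headers so the CDN (and the browser) serves them without touching the origin. The extension list and `max-age` values below are illustrative choices of ours, not prescriptions; versioned filenames are assumed so that "cache forever" is safe.

```python
import os

# File types that are generally safe to cache aggressively at the CDN edge.
STATIC_EXTENSIONS = {".css", ".js", ".png", ".jpg", ".gif", ".svg",
                     ".woff", ".woff2", ".mp4"}

def cache_control_for(path):
    """Pick a Cache-Control header value for a requested path."""
    _, ext = os.path.splitext(path)
    if ext.lower() in STATIC_EXTENSIONS:
        # One year: assumes assets are versioned (e.g. app.v2.css), so a
        # changed file gets a new URL and the old copy can live "forever".
        return "public, max-age=31536000, immutable"
    # Dynamic pages must be revalidated with the origin on every request.
    return "no-cache"
```

A web server or application framework would attach the returned value as the `Cache-Control` response header for each request.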

Page Caching

This is definitely the most powerful type of cache. Page caching converts your dynamic website into static HTML pages where possible, making the website much faster and decreasing server resource usage.

A while ago, I wrote an article about Testing the Impacts of Website Caching Tools.

In that article, with the help of a simple caching plugin, the web server was able to serve four times as many requests using a quarter of the server resources, compared with the test without the caching plugin.

However, as you may know, not every page is “cacheable”. This leads us to the next type…
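A minimal illustration of page caching: store the rendered HTML for anonymous GET requests and bypass the cache for anything personalised. The cacheability rules here (method and session flag only) are deliberately simplified for the sketch.

```python
class PageCache:
    """Stores rendered HTML per URL; skips requests that are not cacheable."""

    def __init__(self):
        self._pages = {}

    def fetch(self, method, url, has_session, render):
        # Personalised or state-changing requests must hit the application.
        if method != "GET" or has_session:
            return render()
        if url not in self._pages:
            self._pages[url] = render()  # render once, reuse afterwards
        return self._pages[url]


renders = []

def render_home():
    renders.append(1)  # count how often the application code actually runs
    return "<html>home</html>"

cache = PageCache()
for _ in range(4):
    cache.fetch("GET", "/home", has_session=False, render=render_home)
```

After the loop, the application rendered the page only once; the other three requests were served from the cache, which is exactly the resource saving the article describes.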

In-Memory Caching

By using software such as Redis or Memcached, your website can retrieve part of your database information straight from the server’s memory.

Using in-memory caching improves the response time of SQL queries. It also decreases the volume of read and write operations on the web server disk.

All kinds of websites should be able to leverage in-memory caching, but not every hosting provider supports it. Make sure yours does before trying to use such technology.
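The usual pattern here is cache-aside: check the memory store first and fall back to the database on a miss. In the sketch below a plain dictionary stands in for Redis or Memcached, and the key format and 60-second TTL are arbitrary illustrative choices.

```python
import time

class MemoryCache:
    """A tiny stand-in for an in-memory store such as Redis or Memcached."""

    def __init__(self):
        self._data = {}

    def get(self, key):
        entry = self._data.get(key)
        if entry is None or entry[1] <= time.time():
            return None  # missing or expired
        return entry[0]

    def set(self, key, value, ttl):
        self._data[key] = (value, time.time() + ttl)


db_reads = []

def query_database(user_id):
    db_reads.append(user_id)  # stands in for a slow SQL query
    return {"id": user_id, "name": "Alice"}

def get_user(cache, user_id):
    """Cache-aside: try the memory store first, fall back to the database."""
    key = "user:%d" % user_id
    row = cache.get(key)
    if row is None:
        row = query_database(user_id)
        cache.set(key, row, ttl=60)  # keep the result for 60 seconds
    return row

cache = MemoryCache()
first = get_user(cache, 42)   # misses the cache, hits the database
second = get_user(cache, 42)  # served from memory
```

With a real Redis deployment, `MemoryCache` would be replaced by a client call, but the read path and the reduction in database load are the same.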

Conclusion

We highly recommend using caching wisely in order to spare your server bandwidth and to make your website work faster and better.

Our Web Application Firewall (WAF) provides a variety of caching options to suit your website’s needs. It also works as a CDN, improving your website’s performance. Not only do we protect your website from DDoS attacks, but we also make it up to 90% faster with our WAF.

We are still planning to cover other best practices about how to improve website resilience for DDoS attacks in other posts. Subscribe to our email feed and don’t miss our educational content based on research from our website security team.

Source: https://securityboulevard.com/2018/08/how-to-improve-website-resilience-for-ddos-attacks-part-ii-caching/

FCC Admits It Lied About the DDoS Attack During Net Neutrality Comment Process – Ajit Pai Blames Obama

During the time the Federal Communications Commission (FCC) was taking public comments ahead of the rollback of net neutrality rules, the agency had claimed its comments system was knocked offline by distributed denial-of-service (DDoS) attacks.

These attacks were used to question the credibility of the comment process, in which millions of Americans had voiced opposition to the net neutrality rollback. The Commission then chose to ignore the public comments altogether.

FCC now admits it’s been lying about these attacks all this time

No one bought the FCC’s claims that its comment system was targeted by hackers during the net neutrality comment process. Investigators have now validated those suspicions, revealing that there is no evidence to support the claims of DDoS attacks in 2017. Following the investigation, which was carried out after lawmakers and journalists pushed the agency to share evidence of these attacks, FCC Chairman Ajit Pai has released a statement admitting that there was no DDoS attack.

This statement would have been surprising coming from Pai – an ex-Verizon employee who has continued to disregard public comments, stonewall journalists’ requests for data, and ignore lawmakers’ questions – if he hadn’t thrown the CIO under the bus, taking no responsibility whatsoever for the lies. In his statement, Pai blamed the former CIO and the Obama administration for providing “inaccurate information about this incident to me, my office, Congress, and the American people.”

He went on to say that the CIO’s subordinates were scared of disagreeing with him and never approached Pai. If all of that is true, the Chairman hasn’t clarified why he didn’t demand to see the evidence, despite nearly everyone outside the agency already believing that the DDoS claim was nothing but a lie intended to invalidate the comment process.

“It has become clear that in addition to a flawed comment system, we inherited from the prior Administration a culture in which many members of the Commission’s career IT staff were hesitant to express disagreement with the Commission’s former CIO in front of FCC management. Thankfully, I believe that this situation has improved over the course of the last year. But in the wake of this report, we will make it clear that those working on information technology at the Commission are encouraged to speak up if they believe that inaccurate information is being provided to the Commission’s leadership.”

The statement comes as the result of an independent investigation by the FCC’s Office of Inspector General that is to be published soon. However, looking at Pai’s statement, it is clear what this report is going to say.

As a reminder, the current FCC leadership didn’t just concoct this story of the DDoS attack. It also tried to bolster its false claims by suggesting that this wasn’t the first such incident, asserting that the FCC had suffered a similar attack in 2014 under former chairman Tom Wheeler, and that Wheeler had lied about the true nature of that attack to save the agency from embarrassment. The former Chairman then went on record to accuse Pai’s FCC of lying to the public, as there was no cyberattack under his leadership.

Pai throws CIO under the bus; takes no responsibility

And now it appears the FCC was also lying about the true nature of the comment system’s failure in 2017. In his statement released today, Pai is once again blaming [PDF] the Obama administration for feeding him inaccurate information.

I am deeply disappointed that the FCC’s former [CIO], who was hired by the prior Administration and is no longer with the Commission, provided inaccurate information about this incident to me, my office, Congress, and the American people. This is completely unacceptable. I’m also disappointed that some working under the former CIO apparently either disagreed with the information that he was presenting or had questions about it, yet didn’t feel comfortable communicating their concerns to me or my office.

It remains unclear why the new team that replaced former CIO David Bray nearly a year ago didn’t debunk what is being called a “conspiracy theory” and come clean about it.

Redacted emails previously obtained under the Freedom of Information Act (FOIA) by American Oversight had revealed that the false theory about the 2014 cyberattack, invoked to justify the 2017 claims, also appeared in a draft blog post written on behalf of Pai. That draft was never published online, keeping Pai’s hands clean since there was no evidence to support the FCC’s claims of a malicious attack. The details were instead sent out to the media, through which the narrative was publicized.

“The Inspector General Report tells us what we knew all along: the FCC’s claim that it was the victim of a DDoS attack during the net neutrality proceeding is bogus,” FCC Commissioner Jessica Rosenworcel wrote. “What happened instead is obvious – millions of Americans overwhelmed our online system because they wanted to tell us how important internet openness is to them and how distressed they were to see the FCC roll back their rights. It’s unfortunate that this agency’s energy and resources needed to be spent debunking this implausible claim.”

Source: https://wccftech.com/fcc-admits-lied-ddos-ajit-pai-obama/

GDPR: A tool for your enemies?

Every employee at your organisation should be prepared to deal with right to be forgotten requests.

It’s estimated that 75% of employees will exercise their right to erasure now that the GDPR (General Data Protection Regulation) has come into effect. However, fewer than half of organisations believe they could handle a ‘right to be forgotten’ (RTBF) request without any impact on day-to-day business.

These findings highlight the underlying issues we’re seeing in the post-GDPR era and how the new regulations put businesses at risk of non-compliance. What is also worrying is that there are wider repercussions for organisations that are not prepared to handle RTBF requests.

No matter how well business is conducted, there is always the possibility of someone holding a grudge against the company and wanting to disrupt its daily operations. One way to do this, without resorting to a standard cyber-attack, is to inundate an organisation with RTBF requests. If a company struggles to complete even a single request, a flood of them can drain its resources and grind the business to a halt. On top of this, failing to comply with the requests in a timely manner can result in a non-compliance issue: a double whammy.

An unfortunate consequence of the new GDPR rules is that a right-to-erasure request is free to submit, making it more likely that customers, or those with a grudge, will request to have their data removed. There are two ways this can be requested. The first is a simple opt-out: removing the name – usually an email address – from marketing campaigns. The other is a more time-consuming, complex discovery and removal of all applicable data. It is this second type of request that hacktivists, aggrieved customers, or other cyber-attackers could weaponise.

One RTBF request is relatively easy to handle – as long as the company knows where its data is stored, of course – and the organisation has a month from the day of receipt to complete it. However, if a company is inundated with requests arriving on the same or consecutive days, they become difficult to manage and can heavily impact daily operations. This kind of attack is comparable to a Distributed Denial of Service (DDoS) attack – for example the attack on the UK National Lottery last year, which knocked out its entire online and mobile capabilities for hours because cyber-criminals flooded the site with traffic – with the company becoming so overloaded with requests that it has to stop its services entirely.

When preparing for a flood of RTBF requests, it is essential that all organisations have a plan in place that streamlines processes for discovery and deletion of customer data, making it as easy as possible to complete multiple requests simultaneously.
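One concrete piece of such a plan is tracking every open request against the statutory clock so that a sudden flood can be triaged oldest-first. The sketch below is ours, not a compliance tool, and the 30-day window merely approximates GDPR’s “one month”.

```python
from datetime import date, timedelta

RESPONSE_WINDOW = timedelta(days=30)  # approximates GDPR's "one month"

def due_date(received):
    """Deadline for completing an erasure request received on `received`."""
    return received + RESPONSE_WINDOW

def triage(requests, today):
    """Split open requests into overdue and on-track lists, oldest first."""
    pending = sorted(requests, key=lambda r: r["received"])
    overdue = [r for r in pending if due_date(r["received"]) < today]
    on_track = [r for r in pending if due_date(r["received"]) >= today]
    return overdue, on_track


requests = [
    {"id": "rtbf-1", "received": date(2018, 6, 1)},
    {"id": "rtbf-2", "received": date(2018, 7, 20)},
]
overdue, on_track = triage(requests, today=date(2018, 7, 25))
```

Even this small amount of bookkeeping makes it obvious which requests are about to become a non-compliance issue when dozens arrive on the same day.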

Don’t let your weakest link be your downfall

The first thing to consider is whether the workforce actually knows what to do should an RTBF request come in (let alone hundreds). Educating all employees on what to do when a request is made – including who in the company to notify and how to respond – is essential to ensuring the organisation is prepared. It means any RTBF request will be dealt with both correctly and in a timely manner. The process must also have clearly defined responsibilities and auditable actions. For companies with a DPO (Data Protection Officer), or someone who fulfils that role, this is the place to begin.

Discovering data is the best defence

The key to responding efficiently to RTBF requests is discovering the data: the team responsible for completing requests must know where all of the organisation’s data is stored. A complete list of where the data can be found – and how to find it – is therefore crucial. While data in structured storage such as a database or email system is relatively simple to locate and act on, it is unstructured data, such as reports and files, that is difficult to find and is the biggest drain on time and resources.

Running a ‘data discovery’ exercise is invaluable in helping organisations learn where data is located, as it finds data on every system and device, from laptops and workstations to servers and cloud drives. Only when you know where all critical data is located can a team assess its ability to delete it and, where applicable, remove all traces of a customer. Repeating the exercise will highlight any gaps and help indicate where additional tools may be required to address the request. Data-at-rest scanning is frequently found as one part of a Data Loss Prevention (DLP) solution.
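At its core, a discovery pass over unstructured files reduces to scanning file contents for the data subject’s identifiers. Real DLP tooling parses many file formats and uses smarter matching; the helper names and the directory-walking wrapper below are our own simplified sketch.

```python
import os

def files_containing(term, documents):
    """Return the paths whose text content mentions `term` (case-insensitive)."""
    needle = term.lower()
    return sorted(path for path, text in documents.items()
                  if needle in text.lower())

def scan_tree(root, term):
    """Walk a directory tree and apply the same match to every readable file."""
    documents = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as handle:
                    documents[path] = handle.read()
            except OSError:
                continue  # unreadable file: skipped here, logged by a real tool
    return files_containing(term, documents)


# Unstructured "stray" data, keyed by path, standing in for a real file share.
share = {
    "reports/q2.txt": "Contact: jane.doe@example.com re: refund",
    "notes/todo.txt": "ship the release",
}
matches = files_containing("JANE.DOE@example.com", share)
```

Running `scan_tree("/mnt/fileshare", "jane.doe@example.com")` over a mounted share would produce the hit list a deletion team needs before it can act on a request.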

Stray data – a ticking time bomb

Knowing where data is stored within the organisation isn’t the end of the journey, however. The constant sharing of information with partners and suppliers also has to be taken into account – and for this, understanding the data flow into and out of the company is important. Shared-responsibility clauses within the GDPR mean that all partners handling critical data are liable should a breach happen or an RTBF request go uncompleted. If critical data sitting with a partner is not tracked by the company that received the RTBF request, truly completing the request becomes impossible, and the organisation could face fines of up to 20 million EUR (or 4% of global turnover, whichever is higher). It is therefore all the more important to know how and where critical data is moving at all times, minimising the sharing of information to those who really need to know.

While there is no silver bullet to prevent stray data, there are a number of technologies which can help to control the data which is sent both in and out of a company. Implementing automated solutions, such as Adaptive Redaction and document sanitisation, will ensure that no recipient receives unauthorised critical data. This will build a level of confidence around the security of critical data for both the organisation and the customer.

With the proper processes and technologies in place, dealing with RTBF requests is a straightforward process, whether it is a legitimate request, or an attempt by hacktivists or disgruntled customers to wreak havoc on an organisation. Streamlining data discovery processes and controlling the data flowing in and out of the company will be integral in allowing a business to complete a RTBF request and ultimately defend the organisation against a malicious use of GDPR.

Source: https://www.itproportal.com/features/gdpr-a-tool-for-your-enemies/

Hackers replacing volumetric DDoS attacks with “low and slow” attacks

By the middle of last year, organisations across the UK had woken up to the threat of DDoS attacks that had, by November, increased in frequency by a massive 91 percent over Q1 2017 and 35 percent over Q2 figures.

A report by CDNetworks in October revealed that more than half of all organisations had been victims of DDoS attacks that regularly took their website, network or online apps down.
To deter cyber-criminals from launching powerful DDoS attacks, organisations began investing heavily to shore up their defences. According to CDNetworks, average annual spending on DDoS mitigation in the UK rose to £24,200 last year, with 20 percent of all businesses investing more than £40,000 in the period.
Such investments also increased businesses’ confidence in defending against business-continuity threats such as DDoS attacks, but unfortunately did little to stem the flow of attacks. Kaspersky Lab’s Global IT Security Risks Survey 2017 noted that the number of DDoS attacks on UK firms had doubled since 2016, affecting 33 percent of all firms.
An analysis of DDoS attacks published by Alex Cruz Farmer, security product manager at Cloudflare, has revealed that while organisations in the UK have certainly upped their spending on DDoS mitigation, cyber-criminals are responding by switching to Layer 7 DDoS attacks, which impact applications and the end user, and away from traditional Layer 3 and 4 attacks, whose effectiveness is no longer guaranteed. This has ensured the unabated continuance of DDoS attacks on enterprises.
“The key difference to these (Layer 7) attacks is they are no longer focused on using huge payloads (volumetric attacks), but based on Requests per Second to exhaust server resources (CPU, Disk and Memory),” he said, adding that by their very nature, Layer 7 based DDoS attacks, such as credential stuffing and content scraping, do not last too long and do not flood networks with hundreds of gigabytes of junk network traffic per second like traditional DDoS attacks.
Farmer added that Layer 7 DDoS attacks have become so popular among hackers that Cloudflare detects around 160 such attacks each day, with some days spiking to over 1,000. For example, hackers frequently carry out enumeration attacks, identifying expensive operations in apps and hammering them with bots to tie up resources and slow down or crash the application. In one case, a database platform was targeted with over 100,000,000 bad requests in just six hours.
Indeed, the first signs of short-duration yet persistent DDoS attacks were observed in May last year. Imperva Incapsula’s Global DDoS Threat Landscape Report, which analysed more than 17,000 network- and application-layer DDoS attacks, concluded that 80 percent of DDoS attacks lasted less than an hour and occurred in bursts, and that three-quarters of targets suffered repeat assaults, with 19 percent attacked 10 times or more.
“These attacks are a sign of the times; launching a DDoS assault has become as simple as downloading an attack script or paying a few dollars for a DDoS-for-hire service. Using these, non-professionals can take a website offline over a personal grievance or just as an act of cyber-vandalism in what is essentially a form of internet trolling,” said Igal Zeifman, Incapsula security evangelist at Imperva to SC Media UK.
Sean Newman, director of Corero Network Security told SC Media UK that reports of increasing application layer DDoS attacks are only to be expected, as attackers continue to look for alternate vectors to meet their objectives.
“A perception that volumetric DDoS attacks are on the decline, is understandable, especially if that is your only lens on the problem.  However, when your view is based on having deployed the latest generation of always-on, real-time, DDoS protection, you will find a rather different story.
“With this lens on the problem, you will find that there is a significantly increasing trend for smaller, more calculated, volumetric DDoS attacks. In fact, Corero customers saw an increase in volumetric attacks of 50 percent compared to a year ago, with over 90 percent of those attacks being less than 5Gbps in size and over 70 percent lasting less than 10 minutes in duration,” he added.
According to Joseph Carson, chief security scientist at Thycotic, organisations are adopting various mitigation techniques to defend against targeted and repeated DDoS attacks, but such technologies often consume significant bandwidth and system memory themselves, interfering with the smooth functioning of databases and apps.
“A targeted DDoS attack is something that is very challenging to mitigate, though luckily such attacks are periodic, usually lasting from days to a few weeks. Techniques that are commonly used today include Access Control Lists, rate limiting and filtering of source IP addresses, though each of these is resource-intensive and can prevent legitimate users from accessing your services.
“A few important lessons can be learned from Estonia’s DDoS experience back in 2007: be very careful about which mitigation techniques you use, as some companies’ responses can be more costly than the DDoS attack itself, so always respond to each attack with the appropriate mitigation response.
“Though the best way to really defend and protect against future DDoS attacks is to think in terms of geographic distribution and not have any centrally dependent location of service. Estonia learned this in 2007 and has now distributed itself beyond its own country’s borders using Data Embassies,” he added.
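The rate-limiting technique Carson mentions is commonly implemented as a token bucket per source IP. Below is a deterministic sketch of ours (timestamps are passed in explicitly so the behaviour is reproducible); production implementations live in routers, load balancers, or kernel filters rather than application code.

```python
class TokenBucket:
    """Per-source limiter: `rate` requests/second, bursts up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = 0.0

    def allow(self, now):
        """Return True if a request arriving at time `now` may pass."""
        # Refill tokens for the time elapsed since the previous request.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


# 2 requests/second with a burst of 3: a flood at t=0 is cut off after 3 hits.
bucket = TokenBucket(rate=2, capacity=3)
flood = [bucket.allow(0.0) for _ in range(5)]
later = bucket.allow(1.0)  # one second later, tokens have refilled
```

This also illustrates Carson’s caveat: a legitimate burst of four requests from one user would see its fourth request dropped, which is exactly how over-aggressive mitigation locks out real users.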
Source: https://www.scmagazineuk.com/hackers-replacing-volumetric-ddos-attacks-with-low-and-slow-attacks/article/767988/

Danish Railway Company DSB Suffers DDoS Attack

Danish rail travelers found buying a ticket difficult yesterday, following a DDoS attack on the railway company DSB.

DSB has more than 195 million passengers every year but, as reported by The Copenhagen Post, the attack on Sunday made it impossible for customers to purchase a ticket via the DSB app, on the website, at ticket machines and certain kiosks at stations – though passengers were able to buy tickets from staff on trains.

“We have all of our experts on the case,” said DSB spokesperson Aske Wieth-Knudsen, with all systems apparently working as normal this morning.

“The DDoS attack seen in Denmark this weekend on critical national infrastructure is precisely the type of attack that EU Governments are seeking to protect citizens against with last week’s introduction of the Network and Information Systems Directive (NIS),” said Andrew Lloyd, president, Corero Network Security.

“Keeping the control systems (e.g. railway signaling, power circuits and track movements) secure greatly reduces the risk of a catastrophic outcome that risks public safety. That said, a successful attack on the more vulnerable management systems can cause widespread disruption. This DDoS attack on the Danish railways’ ticketing site can be added to a growing list of such cyber-attacks, including last October’s DDoS attack on the Swedish Railways that took out their train ordering system for two days, resulting in travel chaos.”

The lessons are clear, Lloyd added; transportation companies and other operators of essential services have to invest in proactive cybersecurity defenses to ensure that their services can stay online and open for business during a cyber-attack.

Source: https://www.infosecurity-magazine.com/news/danish-railway-ddos-attack/

DDoS Attacks Ebb and Flow After Webstresser Takedown

Shortly after Infosecurity Magazine reported that administrators of the world’s largest DDoS-as-a-service website had been arrested, Link11 wrote a blog post, concluding that “In the short period of time since that date, the Link11 Security Operation Center (LSOC) has seen a roughly 60% decline in DDoS attacks on targets in Europe.”

The reported reduction differs significantly from the findings of Corero Network Security. President Andrew Lloyd questioned the conclusions drawn by Link11, saying, “Our own evidence is that attack volumes globally and in Europe have, if anything, increased in the week since the Europol take-down action.”

In stark contrast to the LSOC findings, Corero noticed a spike in distributed denial-of-service (DDoS) attacks around 17 April but said, “Since then, European attacks have remained higher in the second half of the month versus the first half of April and the year as a whole.”

The news that law enforcement agencies had closed down Webstresser.org was a big win for cybercrime fighters. “But even so, the number of attacks will only decrease temporarily,” said Onur Cengiz, head of the Link11 security operation center. “Experience has shown in recent years that for every DDoS attack marketplace taken out, multiple new platforms will pop up like the heads of a hydra.”

A Kaspersky Lab study released on 26 April, on the heels of the Webstresser takedown, provides evidence supporting the changing tides of DDoS attack types and the ebb and flow of attacks Cengiz alluded to in his statement.

According to the Kaspersky Lab DDoS report, Q1 revealed an increased number of DDoS attacks and targets, with distinctions among the different attack methods. “Amplified” attacks had begun to wane but regained some momentum, while network time protocol (NTP) and DNS-based amplification had almost disappeared after the most vulnerable services were patched.

DDoS attacks as a means of personal revenge grew more popular in Q1 2018. Also trending were Memcached attacks that resemble a typical DDoS attack; however, according to the Kaspersky report, “Cybercriminals will likely seek out other non-standard amplification methods besides Memcached.”

As server owners patch vulnerabilities, there will be dips in certain types of attacks. “That being the case, DDoS masterminds will likely seek out other amplification methods, one of which could be LDAP services,” the Kaspersky report authors wrote.

Source: https://www.infosecurity-magazine.com/news/ddos-attacks-ebb-flow-after/

Why DDoS Just Won’t Die

Distributed denial-of-service attacks are getting bigger, badder, and ‘blended.’ What you can (and can’t) do about that.

Almost every organization has been affected by a distributed denial-of-service (DDoS) attack in some way: whether hit directly in a traffic-flooding attack, or suffering the fallout from one of their partners or suppliers being victimized.

While DDoS carries less of a stigma than a data breach in the scheme of security threats, a powerful flooding attack can not only take down a company’s network, but also its business. DDoS attacks traditionally have been employed either to merely disrupt the targeted organization, or as a cover for a more nefarious attack to spy on or steal data from an organization.

The April takedown by the UK National Crime Agency and Dutch National Police and other officials of the world’s largest online market for selling and launching DDoS attacks, Webstresser, was a big win for law enforcement. Webstresser boasted more than 136,000 registered users and supported some four million DDoS attacks worldwide.

But in the end, Webstresser’s demise isn’t likely to make much of a dent in DDoS attack activity, experts say. Despite reports that the takedown led to a significant decline in DDoS attacks, Corero Network Security saw DDoS attacks actually rise on average in the second half of the month of April. “Our own evidence is that attack volumes globally and in Europe have, if anything, increased in the week since the Europol take-down action,” said Andrew Lloyd, president of Corero.

Even without a mega DDoS service, it’s still inexpensive to wage a DDoS attack. According to Symantec, DDoS bot software starts as low as $1 to $15, and less than an hour of DDoS via a service can cost from $5 to $20; a longer attack (more than 24 hours) against a more protected target costs anywhere from $10 to $100.

And bots are becoming even easier to amass and in bigger numbers, as Internet of Things (IoT) devices are getting added to the arsenal. According to the Spamhaus Botnet Threat Report, the number of IoT botnet controllers more than doubled last year. Think Mirai, the IoT botnet that in October of 2016 took down managed DNS provider Dyn, taking with it big names like Amazon, Netflix, Twitter, Github, Okta, and Yelp – with an army of 100,000 IoT bots.

Scott Tierney, director of cyber intelligence at Infoblox, says botnets increasingly will comprise both traditional endpoints—Windows PCs and laptops—and IoT devices. “They are going to be blended,” he said in an interview. “It’s going to be harder to tell the difference” in bots.

The wave of IP-connected consumer products that lack software or firmware update capabilities will exacerbate the botnet problem, according to Tierney.

While IoT botnets appear to be the thing of the future, some attackers have been waging old-school DDoS attacks: in the first quarter of this year, a long-tail DDoS attack lasted more than 12 days, according to new Kaspersky Lab research. That type of longevity for a DDoS was last seen in 2015.

Hardcore heavy DDoS attacks have been breaking records of late: the recent DDoS attack on Github, clocked at 1.35 terabits per second, was broken a week later by a 1.7Tbps DDoS that abused the Memcached vulnerability against an undisclosed US service provider. “That Github [DDoS] record didn’t even last a week,” Tierney said in a presentation at Interop ITX in Las Vegas last week.

The DDoS attack employed Memcached servers exposed on the public Internet. Memcached, an open-source memory-caching system for storing data in RAM for speeding access times, doesn’t include an authentication feature, so attackers were able to spoof requests and amplify their attack. If properly configured, a Memcached server sits behind firewalls or inside an organization.
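The scale of these reflection attacks comes down to simple arithmetic: a tiny spoofed request elicits a response orders of magnitude larger, sent to the victim. The sketch below uses illustrative figures (a ~15-byte request and an assumed worst-case response in the hundreds of kilobytes, consistent with widely reported Memcached amplification factors around 50,000x); the exact numbers are assumptions, not measurements from the attacks described above.

```python
# Illustrative Memcached amplification math (assumed figures, not measurements).
REQUEST_BYTES = 15         # size of a small spoofed UDP request
RESPONSE_BYTES = 750_000   # assumed worst-case response from an exposed server

# Amplification factor: bytes reflected at the victim per byte the attacker sends.
factor = RESPONSE_BYTES / REQUEST_BYTES
print(f"Amplification factor: {factor:,.0f}x")   # 50,000x

# Even a modest 1 Mbps of spoofed request traffic becomes a large flood.
attacker_mbps = 1
reflected_gbps = attacker_mbps * factor / 1000
print(f"Reflected traffic: {reflected_gbps:,.0f} Gbps")   # 50 Gbps
```

This is why a handful of exposed servers can generate terabit-scale floods, and why the fix is configuration (keeping Memcached off the public Internet and its UDP listener disabled) rather than raw capacity on the victim’s side.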

“Memcached amplification attacks are just the beginning” of these jacked-up attacks, Tierney said. “Be ready for multi-vector attacks. Rate-limiting is good, but alone it’s not enough. Get ready for scales of 900Mbps to 400Gbps to over a Terabyte.”

Tierney recommended ways to prepare for a DDoS attack, including:

  • Establish a security policy, including how you’ll enact and enforce it
  • Track issues that are security risks
  • Enact a business continuity/disaster recovery plan
  • Employ good security hygiene
  • Create an incident response plan that operates hand-in-hand with a business continuity/disaster recovery plan
  • Have a multi-pronged response plan, so that while you’re being DDoSed, your data isn’t also getting stolen in the background
  • Execute tabletop attack exercises
  • Hire external penetration tests
  • Conduct user security awareness and training
  • Change all factory-default passwords in devices
  • Know your supply chain and any potential risks they bring
  • Use DDoS traffic scrubbers, DDoS mitigation services

Source: https://www.darkreading.com/endpoint/privacy/why-ddos-just-wont-die/d/d-id/1331734

What Security Risks Should MSPs Expect in 2018

As IT operations are becoming more complex and require both advanced infrastructure and security expertise to increase the overall security posture of the organization, the managed service provider (MSP) industry is gaining more traction and popularity.

Estimated to grow from USD $152.45 billion in 2017 to USD $257.84 billion by 2022, at a CAGR of 11.1%, the MSP industry offers greater scalability and agility to organizations that have budget constraints and opt for a cloud-based IT deployment model.

“The cloud-based technology is the fastest-growing deployment type in the managed services market and is expected to grow at the highest CAGR during the forecast period from 2017 to 2022,” according to ResearchandMarkets. “IT budget constraints for installation and implementation of required hardware and software, limited IT support to manage and support managed services, and need for greater scalability are major factors that are likely to drive the adoption of cloud managed services in the coming years. The cloud-based deployment model offers higher agility than the on-premises deployment model.”

However, MSPs are expected to also become more targeted by threat actors than in the past. Supply chain attacks are becoming a common practice, as large organizations have stronger perimeter defenses that increase the cost of attack, turning MSPs into “low-hanging fruit” that could provide access into infrastructures belonging to more than one victim. In other words, MSPs hold the keys to the kingdom.

Since MSPs are expected to provide around-the-clock security monitoring, evaluation, and response to security alerts, they also need to triage alerts and escalate resources only when dealing with advanced threats.

1. Wormable military-grade cyber weapons

Leveraging leaked zero-day vulnerabilities in either operating systems or commonly deployed applications, threat actors could make the WannaCry incident a common occurrence. As similarly behaving threats spread across infrastructures via internet-connected endpoints – both physical and virtual – MSPs need to react quickly with adequate countermeasures to defend organizations.
While MSPs may not be directly targeted, their role in protecting organizations will become far more important as they’ll need to reduce reaction time to new critical threats to a bare minimum, on an ongoing basis. Consequently, network security and threat mitigation will become commonplace services for MSPs.

2. Next-Level Ransomware

The rise of polymorphism-as-a-service (PaaS) will trigger a new wave of ransomware samples that will be even more difficult for security solutions to detect. Coupled with new encryption techniques, such as leveraging GPU power to expedite file encryption, ransomware will continue to plague organizations everywhere. Backup management and incident response that provide full data redundancy need to be at the core of MSP offerings when dealing with these new ransomware variants.

While traditional ransomware will cause serious incidents, threat actors might also hold companies at gunpoint by threatening to disrupt services with massive distributed-denial-of-service (DDoS) attacks performed by huge armies of IoT botnets.

3. OSX Malware

The popular belief that Apple’s operating system is immune to malware was recently put to the test by incidents such as the ransomware-disseminating Transmission app and advanced remote access Trojans (RATs) that have been spying on victims for years. With Apple devices making their way into corporate infrastructures and onto C-level executives’ desks, managing and securing them is no longer optional, but mandatory.

Security experts have started finding more advanced threats with macOS-specific components gunning for organizations, meaning that threat actors will continue down this path during 2018. Regardless of company size, vertical, or infrastructure, MSPs need to factor in macOS malware proliferation and prepare adequate security measures.

4. Virtualization-Aware Threats

Advanced malware has been endowed with virtualization-aware capabilities, making it not only difficult for traditional endpoint security solutions to identify, but also highly effective at lateral movement in virtual infrastructures. MSPs need to identify and plan to deploy key security technologies that are not just designed from the ground up to defend virtual infrastructures, but are also hypervisor-agnostic, offer complete visibility across infrastructures, and detect zero-day vulnerabilities.

Focusing on proactive security technologies for protecting virtual workloads against sophisticated attacks will help MSPs offer unique value to their services.

5. Supply Chain Attacks

MSPs themselves could also become targets for threat actors, which is why deploying strong perimeter defenses on their end should also be a top priority. Having access to, and managing security for, remote infrastructures makes MSPs likely candidates for advanced attacks. Whether attackers target their infrastructure directly or “poison” commonly deployed tools, MSPs should treat the security of their own infrastructure with the utmost scrutiny.

Source: https://securityboulevard.com/2018/04/what-security-risks-should-msps-expect-in-2018/